Research Article · Open Access
Lossy Compression of Noisy Images Based on Visual Quality: A Comprehensive Study
EURASIP Journal on Advances in Signal Processing volume 2010, Article number: 976436 (2010)
This paper concerns lossy compression of images corrupted by additive noise. The main contribution of the paper is that the analysis is carried out from the viewpoint of compressed image visual quality. Several coders for which the compression ratio is controlled in different manners are considered. Visual quality metrics that are the most adequate for the considered application (WSNR, MSSIM, PSNR-HVS-M, and PSNR-HVS) are used. It is demonstrated that under certain conditions the visual quality of compressed images can be slightly better than the quality of the original noisy images due to the image filtering effect of lossy compression. The "optimal" parameters of coders for which this positive effect can be observed depend upon the standard deviation of the noise. This allows proposing an automatic procedure for compressing noisy images in the neighborhood of the optimal operation point, that is, where visual quality either improves or degrades only insignificantly. Comparison results for a set of grayscale test images and several noise variances are presented.
Images are among the most widespread types of data transferred via communication channels, displayed, manipulated, and saved in computers and other devices. The amount of images is increasing rapidly, which makes transferring and storing them problematic for several reasons, such as a limited bandwidth of communication channels, limited memory space, restricted time available for transferring data to customers, and so forth. Due to this, image compression is required [1, 2]. Lossless coding usually does not produce sufficient compression ratios for many practical applications. Although for certain applications, such as medical imaging and hyperspectral remote sensing, many experts consider lossy compression impossible or not recommended [3, 4], it is nevertheless still used for these applications as well (see, e.g., [3, 5–7]). In the rest of this paper, by image compression we mean lossy compression unless otherwise specified.
As is known, lossy compression introduces visible distortions and artifacts that make a compressed image unpleasant. Since most images are intended for visualization, it is worth characterizing the quality of a compressed image taking into account the human visual system (HVS) [8, 9]. Although conventional quantitative criteria such as MSE and peak signal-to-noise ratio (PSNR) are still widely used in the design and comparison of compression techniques, it has been shown that MSE and PSNR do not produce an adequate description of visual quality [1, 8–13]. Some peculiarities of HVS were taken into account in the JPEG standard, and there are corresponding options in JPEG2000 [8, 14, 15].
However, intensive research continues in the area of designing and testing visual quality metrics (indices) [12, 16, 17]. This research seems far from complete, although there are many metrics that are able to characterize, adequately enough, the visual quality of images subject to compression or corrupted by different types of noise. The best performance has been demonstrated by the metrics PSNR-HVS-M, PSNR-HVS, WSNR, and MSSIM (see the data for the subset Safe in the corresponding paper). (The codes for PSNR-HVS-M and PSNR-HVS are available at http://www.ponomarenko.info/; the codes for WSNR and MSSIM can be freely downloaded at http://foulard.ece.cornell.edu/gaubatz/metrix_mux/.)
In many applications it is not taken into account that an original image subject to compression could be noisy. Meanwhile, lossy compression of noisy images has certain peculiarities. Research on coders for compressing images contaminated by intensive noise started for radar and astronomic imagery in the mid-1990s [19–22]. Later, results of similar studies for grayscale optical images appeared [23, 24]. It has been established that compression, under certain conditions, also removes noise from the image to be compressed [5, 19–27]. Thus, when applied to noisy images, compression provides twofold benefits: it decreases the image size considerably and reduces noise. Under certain conditions (for properly selected parameters of compression), this can lead to additional positive outcomes such as better classification of noisy remote sensing data and improved diagnostic quality of medical images. It has also been stated and demonstrated by examples that compressed image visual quality can improve as well.
Note that denoising performed through compression is not as efficient as conventional filtering in terms of PSNR [5, 25, 29]. The improvement of PSNR (IPSNR) achieved through compression at the optimal operation point (OOP), where IPSNR is maximal, can reach a few dB if a compressed image does not contain too many texture regions and the noise is intensive. Meanwhile, efficient conventional filters provide IPSNR of about 10 dB for the same noisy images. This is the reason why it is often recommended to apply prefiltering before compression [30, 31]. Although such a two-stage procedure is more efficient in the sense of providing better quality of decompressed images, it is more complicated and time consuming. Because of this, we concentrate on the direct application of compression to original data.
An IPSNR of a few dB does not necessarily lead to better visual quality of compressed images in comparison to the original, noisy ones. It has been demonstrated that a PSNR improvement of a few dB by means of noisy image filtering does not always result in better visual quality of the denoised images. However, it is possible to expect that, under certain conditions, compression is able to produce better visual quality of compressed noisy images. Thus, one goal of this paper is to analyze whether improvement of visual quality is possible and under which conditions.
The aforementioned IPSNR in compression of noisy images also depends upon the type of noise and the coder used [25, 29]. To simplify the considered situation, we assume that noise is zero-mean i.i.d. Gaussian. This model is often used as the simplest one for optical grayscale and color images [32–34].
Better image coders produce larger IPSNR when compressing a noisy image at OOP [25, 26]. Because of this, for our study we have chosen the following three coders: the wavelet-based coder SPIHT and the DCT-based coders AGU and ADCTC (both available at http://www.ponomarenko.info/). These coders perform considerably better than the standard JPEG; AGU and ADCTC outperform JPEG2000 on a set of standard test images. Compression parameters for these three coders can be easily varied, and there are automatic procedures that allow reaching OOP for them [25, 26]. Therefore, the second goal of this paper is to compare the efficiency of these coders in terms of visual quality of compressed images and, if possible, to give practical recommendations on how to set coder parameters.
Note that there are many papers that consider the visual quality of lossy compressed images assumed to be noise-free [12, 16, 17]. There are fewer papers dealing with lossy compression of noisy images [19–27]. However, we have not found papers that concern lossy compression of noisy images while taking visual quality metrics into account. The novelty of this paper lies in stating the problem in just this manner.
The paper is organized as follows. Section 2 gives a brief analysis of peculiarities of compression applied to noisy images in terms of PSNR. Analyzed visual quality metrics are described in Section 3. Image coders used for this study and a methodology of their performance comparison are given in Section 4. Simulation results and discussion are presented in Section 5 and coder performance comparisons and visual examples are given in Section 6. Besides, some practical recommendations are presented there. Finally, the conclusions follow.
2. Brief Analysis of Noisy Image Lossy Compression in Terms of PSNR
Efficiency of image lossy compression is usually analyzed in terms of rate-distortion curves. These curves represent dependencies of PSNR (or MSE) on bits per pixel (bpp) or compression ratio (CR), where PSNR and MSE are calculated for an original image and the corresponding compressed image:

PSNR = 10 log_10(255^2 / MSE),    (1)
MSE = (1 / (I_Im * J_Im)) * Σ_{i=1}^{I_Im} Σ_{j=1}^{J_Im} (I_ij^c − I_ij)^2,    (2)

where I_Im and J_Im denote the image size, I_ij in (2) relates to an original image that, in general, is noisy, and I_ij^c denotes the decompressed image. These curves for any coder have the same general behavior: PSNR reduces (MSE increases) if bpp becomes smaller (CR increases).
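As a minimal sketch of how these two quantities are computed in practice (using NumPy and assuming 8-bit images with a peak value of 255), one may write:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two equal-size grayscale images, eq. (2)."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB for 8-bit images, eq. (1); infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For example, two images differing everywhere by one gray level give MSE = 1 and PSNR = 10 log10(255^2) ≈ 48.13 dB.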
However, nontraditional quantitative criteria are commonly used in the analysis of lossy compression of images corrupted by noise, at least if one deals with studies for test data. Suppose one has a noise-free test image I^true, a noisy image I^n where noise is artificially added to I^true, and a decompressed image I^c where compression is applied to I^n. Then it is also possible to calculate a nontraditional MSE_nf for the pair of images I^true and I^c [23–27] and, respectively,

PSNR_nf = 10 log_10(255^2 / MSE_nf),    (3)
MSE_nf = (1 / (I_Im * J_Im)) * Σ_{i=1}^{I_Im} Σ_{j=1}^{J_Im} (I_ij^c − I_ij^true)^2.    (4)

The curves MSE_nf(bpp) and PSNR_nf(bpp) might behave in a specific manner [23–27]: they often have a minimum and a maximum, respectively. This means that, according to the criteria MSE_nf and PSNR_nf, the quality of the decompressed image I^c with respect to the noise-free I^true can be better than the quality of the noisy image I^n. This happens due to the aforementioned noise filtering effect provided by lossy compression.
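The filtering effect behind these curves can be illustrated numerically. In the toy sketch below, a 3×3 mean filter stands in for the denoising action of lossy compression (an assumption for illustration only; a real coder would take its place), and PSNR_nf is computed against the noise-free reference:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    m = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / m)

def box3(img):
    """3x3 mean filter with wrap-around borders: a crude stand-in for the
    noise-filtering effect of lossy compression."""
    acc = np.zeros_like(img, dtype=np.float64)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return acc / 9.0

rng = np.random.default_rng(0)
true = np.full((256, 256), 128.0)           # noise-free reference I^true
noisy = true + rng.normal(0, 10, true.shape)  # noisy image I^n
denoised = box3(noisy)                      # plays the role of I^c
# "nontraditional" PSNR_nf: the reference is the noise-free image
assert psnr(true, denoised) > psnr(true, noisy)
```

On this flat test field the filter reduces the noise variance about ninefold, so PSNR_nf of the "decompressed" image exceeds that of the noisy one by roughly 9.5 dB, mimicking the maximum of the PSNR_nf curve.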
Figure 1 gives two examples of the considered dependencies obtained for the grayscale test image Lena corrupted by Gaussian additive noise with two values of variance. Two coders are considered, namely, AGU and JPEG2000. The presented dependencies confirm the basic statements given above in this section and in the Introduction. There are also other interesting observations. The CR and bpp that correspond to OOP depend upon the noise variance: the OOP bpp is about 0.2 for both coders for the larger variance, whilst it is about 0.38 for the smaller one. Thus, a larger OOP CR can be provided if noise is more intensive (CR = 8/bpp). PSNR_nf at OOP for the coder AGU is slightly larger than for JPEG2000 for a given variance, and it decreases if the noise variance increases. More detailed analysis is given in the corresponding paper.
Depending upon the coder, CR can be varied in different ways by one or another parameter controlling compression (PCC). For DCT-based coders (variations of JPEG, the coders AGU and ADCTC), CR is controlled by a quantization step (QS), and it is quite difficult to provide a desired CR since CR also depends upon the properties of the image to be compressed. In turn, for JPEG2000 and SPIHT, the required CR can be approximately provided by setting the corresponding desired bpp. In some applications this is an obvious advantage of the modern wavelet-based coders although, as will be shown below, this is not the case for the considered application. Note that for JPEG2000 it is also possible to vary QS instead of bpp, although this is not the basic option.
OOP (defined according to the maximum of PSNR_nf) or, at least, its neighborhood can be attained automatically. For example, for the coder AGU one has to set the quantization step proportional to the noise standard deviation, QS = 4.5σ. To prove this, Figure 2 presents dependencies of MSE_nf on b, where QS = bσ. Three values of noise variance (Var) are considered: 50, 100, and 400. As is seen, in all three cases minimal values of MSE_nf are observed for b close to 4.5, that is, for QS ≈ 4.5σ. More details are given in the corresponding paper.
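The rule above amounts to a one-line mapping from the noise standard deviation to the coder setting; the numeric factor is taken as 4.5 per the recommendation discussed in the text:

```python
def oop_quantization_step(sigma: float, factor: float = 4.5) -> float:
    """Quantization step placing a DCT-based coder (e.g., AGU) near its
    optimal operation point for i.i.d. additive Gaussian noise with
    standard deviation sigma. The factor 4.5 follows the recommendation
    discussed in the text."""
    return factor * sigma
```

For instance, noise with variance 100 (sigma = 10) yields QS = 45.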
Analysis of the plots in Figures 1 and 2 also shows the following. The filtering efficiency of lossy compression, in terms of IPSNR, increases if the noise variance increases. For the image Lena, IPSNR at OOP is larger for the larger noise variance (Figure 1). For the test image Barbara, IPSNR is about 3 dB for Var = 50 and 100 and about 5.5 dB for Var = 400 (Figure 2).
Then, if one has a priori knowledge of σ or it is estimated in a blind manner with appropriate accuracy [39, 40], it becomes possible to reach the OOP neighborhood of lossy compression in a fully automatic manner. Moreover, it was recently shown that by setting QS proportionally to σ for AGU and the other DCT-based coder, ADCTC, it is possible to provide maximal probability of correct classification of compressed multichannel remote sensing images corrupted by additive noise. These results allow expecting that OOP can exist for quality metrics other than the conventional PSNR and that it can be reached automatically.
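The blind estimation of σ mentioned here can be sketched with a simple robust estimator (an illustrative method, not necessarily the one used in [39, 40]): first differences of adjacent pixels suppress smooth image content, and the median absolute deviation (MAD) makes the estimate robust to edges.

```python
import numpy as np

def estimate_sigma_mad(img: np.ndarray) -> float:
    """Blind noise std estimate. For N(0, s^2) noise, horizontal first
    differences are N(0, 2 s^2), hence the 1/sqrt(2); 0.6745 is the MAD
    of a standard Gaussian, turning MAD into a std estimate."""
    d = np.diff(img.astype(np.float64), axis=1) / np.sqrt(2.0)
    return float(np.median(np.abs(d - np.median(d))) / 0.6745)

rng = np.random.default_rng(1)
flat = rng.normal(0.0, 10.0, (512, 512))   # pure-noise test field, sigma = 10
```

On the pure-noise field above, the estimate lands very close to the true σ = 10; on real images, edges contribute outliers that the median largely rejects.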
3. Analyzed Visual Quality Metrics
It is well known that conventional quality metrics, such as MSE, SNR, and PSNR, do not always correlate with image visual quality [10, 14, 18, 41]. This resulted in intensive design of other full-reference fidelity metrics. The current situation in this field is well characterized by the title of the invited talk of M. Pedersen at the Color Imaging Symposium in Gjøvik (Norway, 2009): "111 Full-Reference Image Quality Metrics and Still not Good Enough?" Thus, the choice of a proper visual quality metric for analysis and comparisons is always a problem and can be argued. Because of this, we relied on the data and conclusions presented in the paper where a comparison of 18 visual quality metrics has been performed for several subsets of distortion types of the database TID2008 (available at http://www.ponomarenko.info/tid2008.htm). The subset Safe allows considering the following seven types of distortions: additive white Gaussian noise, spatially correlated noise, high-frequency noise, impulse noise, Gaussian blur, and distortions due to lossy compression by JPEG and JPEG2000. The best performance (in the sense of the largest Spearman and Kendall correlation of a quality metric with mean opinion score) for this subset was achieved by the metrics PSNR-HVS-M, PSNR-HVS, and WSNR. Another subset of distortion types, called JPEG, takes into account only images compressed by JPEG and JPEG2000. For this subset, the best results have been provided by PSNR-HVS-M, PSNR-HVS, and MSSIM; WSNR was the fourth best. Thus, we have decided to use these four visual quality metrics.
The visual quality metrics PSNR-HVS and PSNR-HVS-M can be determined similarly to (3):

PSNR-HVS = 10 log_10(255^2 / MSE_HVS),    (5)
PSNR-HVS-M = 10 log_10(255^2 / MSE_HVS-M),    (6)

where MSE_HVS and MSE_HVS-M are calculated for the decompressed and noise-free images taking into account several aspects of HVS. Note that, similarly to PSNR, both PSNR-HVS and PSNR-HVS-M are expressed in dB, and their larger values correspond to better visual quality. The metric MSE_HVS and, respectively, PSNR-HVS account for the different sensitivity of human eyes to distortions in low and high spatial frequencies, similarly to what the JPEG standard does by using a nonuniform quantization table [11, 14]. The metric MSE_HVS-M and, respectively, PSNR-HVS-M additionally take masking effects into consideration. The metrics WSNR (also expressed in dB) and MSSIM are able to incorporate several peculiarities of HVS. Note that MSSIM varies within the limits from 0 to 1 (larger values of MSSIM relate to better visual quality).
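MSSIM averages a windowed SSIM map over the image; purely to illustrate its 0-to-1 behavior and its luminance/contrast/structure terms, here is a simplified single-scale SSIM computed from global image statistics (an assumption-laden sketch, not the published MSSIM implementation):

```python
import numpy as np

def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
    """Simplified single-scale SSIM from global statistics. The published
    MSSIM averages a locally windowed version; this sketch only shows the
    form of the index and its 0..1 range for typical inputs."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

An image compared with itself gives exactly 1; any distortion (e.g., a constant luminance shift) pushes the index below 1.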
These peculiarities of the metrics do not allow direct comparison of their values with each other. Since none of them is "perfect," we have analyzed data for all four metrics and drawn conclusions based on joint analysis.
4. Used Coders and Methodology of Their Performance Comparison
Currently, many different methods for image compression are available. As already mentioned, for our analysis we have selected three of them. The reasons for choosing them are the following. First, we would like to consider state-of-the-art methods. In this sense, the coders AGU and, especially, ADCTC produce rate-distortion characteristics comparable to or better than JPEG2000 and, consequently, considerably better performance than the standard JPEG [36, 37]. Second, all three considered coders are transform-based, but AGU and ADCTC are based on DCT whilst SPIHT exploits wavelets. Note that we would like to analyze the visual quality of noisy images compressed in a lossy manner for different transform-based compression methods. Third, the coder SPIHT can be considered a freely available counterpart of JPEG2000; then, by comparing other coders to SPIHT, it is in fact possible to roughly compare their performance to JPEG2000. Fourth, the considered coders use different PCCs (QS and bpp). Thus, it is interesting to understand how to manage these PCCs to provide appropriate visual quality of compressed images.
Recall that the coder ADCTC uses an optimized partition scheme that contains square or specific rectangular-shape blocks. The coder AGU uses fixed-size 32 × 32 blocks. Both coders employ context frequency modeling of quantized DCT coefficients and deblocking at the decompression stage.
Due to applying a partition scheme, ADCTC is able to adapt to the content of the image being compressed. This adaptation ability is demonstrated for the test image Barbara corrupted by noise (Figure 3(a)); the obtained partition scheme is imposed over the image, and QS is equal to 35. Larger blocks are used for homogeneous image regions or fragments with homogeneous texture; in turn, smaller blocks are mostly used for heterogeneous regions. The provided CR = 13. Note that the partition scheme depends upon QS: for larger QS and CR, fewer small blocks appear in the optimized partition scheme.
The compressed image is presented in Figure 3(b). The noise filtering effect is clearly seen, especially in homogeneous image regions; this is a positive effect. Some distortions (slight smearing of sharp edges and details) are also introduced; these are mostly due to quantization of DCT coefficients, and this is a negative effect. The joint contribution of these two effects determines the quality of compressed images, and it depends upon QS. However, it is difficult to derive parameters and statistics of these distortions analytically.
Performance of the considered coders can be compared in different ways. For example, it is possible to obtain dependencies of the considered visual quality metrics on bpp. This is a good option for the coder SPIHT, for which bpp (bit rate) serves as PCC, but not the best option for the coders AGU and ADCTC, where QS serves as PCC. Thus, to compare the performances of the coders ADCTC and AGU, PCC was set for them as QS = bσ, where the factor b was chosen equal to 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, and 6.5. The use of larger b leads to larger QS and, respectively, larger CR. Note that for the same QS and compressed image the coder ADCTC provides a larger CR than AGU.
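In code, this methodology reduces to sweeping the factor b over the listed grid (the value of σ below is illustrative):

```python
import numpy as np

sigma = 10.0   # illustrative noise std (variance 100)
b_grid = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])
qs_grid = b_grid * sigma   # PCC for AGU/ADCTC: quantization step QS = b * sigma
```

Each QS in `qs_grid` would be passed to the coder in turn, and the quality metrics of the decompressed images recorded.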
It is more complicated to compare the performance of the SPIHT coder to AGU and ADCTC. For this purpose, we first carried out compression of a given noisy image by the coder AGU with the values of QS defined above. Then we determined the values of CR provided by AGU and recalculated them to the corresponding values of bpp. After this, the SPIHT coder was applied with the calculated set of bpp values. The obtained sets are presented in Table 1.
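For an 8-bit grayscale image, the CR-to-bpp recalculation used here is a simple reciprocal relation:

```python
def cr_to_bpp(cr: float, source_bpp: float = 8.0) -> float:
    """Bit rate (bits per pixel) corresponding to a compression ratio,
    assuming an 8-bit grayscale source."""
    return source_bpp / cr

def bpp_to_cr(bpp: float, source_bpp: float = 8.0) -> float:
    """Inverse mapping: compression ratio for a given bit rate."""
    return source_bpp / bpp
```

For example, CR = 16 corresponds to 0.5 bpp, matching the CR = 8/bpp relation used in Section 2.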
As is seen from the data in Table 1, for the same QS (or, equivalently, the same b) the image GoldHill is compressed slightly better than the image Barbara and considerably better than the image Baboon.
5. Simulation Results Analysis and Discussion
Since the studies of visual quality for lossy compressed noisy images are at their initial stage, we have decided to restrict ourselves to analyzing grayscale images, without the necessity to consider the peculiarities of HVS dealing with color perception. Three standard 512 × 512-pixel test images have been chosen, namely, Baboon, Barbara, and Goldhill, which are sufficiently different in content.
Since the most typical practical values of additive noise variance are within the limits from 25 to 200 [32, 34], in our experiments we consider two values of noise variance. White Gaussian noise with these variances was generated and added to the test images Baboon, Barbara, and Goldhill, providing six noisy images in total. After this, each image has been compressed by all three coders with several compression ratios.
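Generating such noisy test data can be sketched as follows (clipping to the 8-bit range is an implementation choice, not stated in the text):

```python
import numpy as np

def add_awgn(img: np.ndarray, variance: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean white Gaussian noise with the given variance and
    clip the result to the 8-bit range [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, np.sqrt(variance), img.shape)
    return np.clip(noisy, 0.0, 255.0)
```

Applied to each of the three test images with each of the two variances, this yields the six noisy images used in the experiments.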
Below we show the dependencies of PSNR, PSNR-HVS, PSNR-HVS-M, and WSNR on b in the same plot (since all these metrics are expressed in dB) for the coders AGU and ADCTC. Similarly, we present all these dependencies on bpp for the coder SPIHT. Dependencies of MSSIM on b or bpp are presented in tables.
Let us start from the simplest case, the test image GoldHill. The plots are presented in Figures 4 and 5 as well as in Table 2. First, it is worth noting that a maximum of the standard PSNR takes place. For the coders ADCTC and AGU these maxima are observed for b about 4, that is, for QS ≈ 4σ. Recall that the earlier recommendation was to set QS = 4.5σ for the coders with QS used as PCC. For SPIHT, the maximum of PSNR is also observed; this happens for bpp about 0.44, which corresponds to approximately the same operation point for the coder AGU (see the PSNR curve in Figure 5 and the data in Table 1). These results confirm that OOPs (according to the PSNR criterion) exist for different coders.
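Locating an OOP from measured data is simply an argmax over the sampled grid of b. The PSNR values below are hypothetical, for illustration only, not taken from the paper's figures:

```python
import numpy as np

# Hypothetical measured PSNR (dB) versus the factor b (QS = b * sigma)
b = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5])
psnr_db = np.array([28.1, 28.9, 29.8, 30.4, 30.2, 29.1, 27.5])

i_oop = int(np.argmax(psnr_db))   # OOP = point of maximal quality metric
b_oop = float(b[i_oop])
```

The same argmax applies unchanged to PSNR-HVS, PSNR-HVS-M, WSNR, or MSSIM curves; a monotonically decreasing curve simply returns the smallest sampled b, signaling that no interior OOP exists.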
It is also seen in Figure 4(b) that at least one visual quality metric, PSNR-HVS, similarly to the standard PSNR, has a maximum for the coder ADCTC, although this maximum is not sharp. This maximum is observed for slightly smaller values of b than for the standard PSNR. Thus, we can state that there exist situations (images, noise variances, and at least one visual quality metric) for which an OOP is observed. For the coder AGU, the values of PSNR-HVS remain almost constant for small b and start decreasing for larger b.
For the other visual quality metrics, there are no maxima (OOP) for the image Goldhill at this noise variance. All of them decrease monotonically with increasing b. Besides, by comparing the corresponding dependencies for the coders AGU and ADCTC, one can see that all characteristics of ADCTC are slightly better than those of AGU over the whole considered range of CR. Note that a maximum of the metric MSSIM also exists for SPIHT.
Analysis of the dependencies in Figure 5 shows that for the coder SPIHT none of the visual quality metrics has a maximum. For larger bpp, they decrease slowly; then, for smaller bpp, the reduction becomes faster.
Let us now consider the case of the larger noise variance. The results for ADCTC are presented in Table 2 for the image Goldhill. As seen, the standard PSNR again has a maximum. PSNR-HVS, PSNR-HVS-M, and WSNR monotonically reduce as b increases, as in the previous case (see the plots in Figure 4). The metric MSSIM has a maximum as well; thus, according to the metric MSSIM, an OOP can also exist. For the coders AGU and SPIHT all dependencies are monotonically decreasing.
Let us now consider the most complex test image, Baboon. The plots of the quality metrics are presented for the coders AGU and ADCTC in Figure 6. As is seen, for this test image and noise variance, all considered metrics decrease as b (and, respectively, CR) increases. Note that the speed of reduction grows as b becomes larger. This can be explained as follows: the positive effect of noise filtering is smaller than the negative effect of introduced distortions, since the noise is not too intensive and the image content is complex. For the coder SPIHT, the situation is the same, that is, according to all metrics, decompressed image quality becomes poorer as bpp decreases.
Table 3 presents data for the coder ADCTC for the image Baboon at the larger noise variance. As can be seen, all metrics also reduce when b and CR increase. The same tendencies hold for the other coders. This means that for complex images like Baboon, lossy compression mainly reduces their visual quality even if they are originally noisy. This observation raises the task of analyzing the content of an image before compression and setting the coder parameters accordingly; this can be a topic of future research.
Let us now analyze the data obtained for the test image Barbara. A part of the obtained results is given in Tables 4 and 5. According to the data in Table 4 (the coder AGU), the metrics PSNR and MSSIM have maxima. Concerning the metrics PSNR-HVS, PSNR-HVS-M, and WSNR, they all monotonically decrease, but for b less than 3.5 the dependence is almost flat. Similar results are observed for the coders ADCTC and SPIHT; for the coder ADCTC, a maximum of PSNR-HVS is also observed.
In turn, Table 5 presents data for the coder SPIHT. In this case, OOPs are observed for the metrics PSNR, PSNR-HVS, and MSSIM. Interestingly, the bpp values of these OOPs almost coincide (all of them are observed for bpp about 0.85). Thus, since several metrics show improvement of visual quality at the same OOP, it is possible to state that the improvement really takes place.
For the coder AGU, a maximum of MSSIM also takes place. Maxima of PSNR-HVS, PSNR-HVS-M, and MSSIM are observed for the coder ADCTC as well, all at the same value of b.
Concluding the analysis, it is possible to state the following. First, OOPs can be observed in lossy compression of noisy images not only for the standard metric PSNR but also for other metrics (PSNR-HVS, MSSIM) that take HVS into account. Second, these OOPs can take place for coders based both on wavelets (e.g., SPIHT) and on DCT (e.g., AGU). Third, it is more probable to have an OOP when compressing images of simpler structure corrupted by noise with quite large variance; in the opposite cases, the dependencies of visual quality metrics are monotonically decreasing functions of CR. Fourth, most often OOPs are observed for QS = 4.5σ if a coder's PCC is QS, or for the corresponding bpp. If one follows this recommendation for a simple image, a coder provides lossy compression in the neighborhood of OOP (in the sense of visual quality of a compressed noisy image). If a compressed image has a complex structure, then lossy compression with QS = 4.5σ leads to only a small degradation of compressed image visual quality (see, e.g., the data for the image Baboon). The latter conclusion serves as a prerequisite for an automatic procedure of noisy image lossy compression.
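The prerequisite just mentioned can be sketched end to end: blindly estimate σ, then set QS proportional to it. The MAD-based estimator and the factor 4.5 are assumptions consistent with the text; the actual call to a DCT coder such as AGU is not shown:

```python
import numpy as np

def auto_compress_qs(noisy: np.ndarray, factor: float = 4.5) -> float:
    """Automatic setup for compressing a noisy image:
    (1) blindly estimate the noise std via the MAD of horizontal first
        differences (robust to edges; illustrative estimator),
    (2) return the recommended quantization step QS = factor * sigma_hat.
    Passing this QS to the coder is assumed to happen elsewhere."""
    d = np.diff(noisy.astype(np.float64), axis=1) / np.sqrt(2.0)
    sigma_hat = np.median(np.abs(d - np.median(d))) / 0.6745
    return float(factor * sigma_hat)
```

For an image with noise of variance 100, this returns a QS close to 45, placing the coder near the neighborhood of OOP without any user interaction.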
6. Coder Performance Comparison, Visual Examples, and Practical Recommendations
Although Figures 4 and 6 allow some comparison of coder performance, we prefer to present a more thorough analysis. Figure 7 demonstrates the dependencies of the metric PSNR-HVS for all three considered methods of compression for the image Goldhill. As seen, ADCTC provides, in general, better visual quality (according to the metric PSNR-HVS) than the other coders over the most interesting range of b. Note that two benefits are provided in this case. First, PSNR-HVS is improved in comparison to the original noisy image or, equivalently, to the original image compressed in a lossless manner. Second, at this operation point the coder AGU provides CR = 9.5 and the coder ADCTC provides CR = 10. This means that the CR is about 7 times larger than for lossless compression (RAR produces CR = 1.13, ZIP gives CR = 1.12, RKIM has CR = 1.35, JPEG-LS provides CR = 1.34).
Let us give the corresponding example. Figures 8(a) and 8(b) present the noise-free and noisy image Goldhill, respectively. In turn, Figures 8(c) and 8(d) demonstrate the compressed images for the coders ADCTC and AGU, respectively, at the optimal QS. Noise is clearly visible in the image in Figure 8(b) and certainly degrades its visual quality. For both coders, the image filtering effect is well observed, especially in the upper part that corresponds to the sky. The slightly better visual quality of the image in Figure 8(c) manifests itself in sharper edges and details, due to the partition scheme that allows adaptation to image content.
Let us analyze coder performance in terms of another metric. Figure 9 presents the dependencies of PSNR-HVS and WSNR on b for the test image Barbara. According to the metric PSNR-HVS (Figure 9(a)), the coder ADCTC again outperforms the other coders over the considered range of b from 2 to 5.5. ADCTC is slightly better according to the metric WSNR as well (Figure 9(b)). However, no maximum is observed for any of the considered coders.
It might seem that, according to the metrics PSNR-HVS-M and WSNR, compression does not produce improvement in any case (for any image, coder, and noise variance). Taking into account the previous results of our studies reported above, we decided to consider a larger noise variance for the test image Goldhill.
The obtained dependencies of PSNR-HVS-M and WSNR on b for this larger variance are presented in Figure 10. Analysis shows that PSNR-HVS-M has a maximum for the coder ADCTC and, again, it is observed near the recommended QS (Figure 10(a)). Thus, there are practical situations when PSNR-HVS-M has an OOP similarly to the other visual quality metrics. On the contrary, the dependencies of WSNR on b remain monotonically decreasing (Figure 10(b)).
Consider the most complicated case: the textural test image Baboon corrupted by not very intensive noise with variance 50. Due to the masking effect, noise is seen only in the central part of this image (see Figure 11(b)). If lossy compression is applied, a filtering effect is observed in this part (see the compressed images in Figure 11(c) for the coder ADCTC and in Figure 11(d) for the coder SPIHT, bpp = 1.52). In other parts of the compressed image Baboon, no or very small visual distortions are observed.
In aggregate, the presented results show that if it is desirable to compress images corrupted by additive noise while retaining good visual quality, it is expedient to carry out compression with QS = 4.5σ for coders controlled by the quantization step. This can be performed automatically if, at the first stage, a noise standard deviation estimate is obtained in a blind manner. Thus, for AGU and ADCTC, automatic compression can be done quite easily.
If a coder CR is controlled by bpp, as is usually done for SPIHT and JPEG2000, the situation is more complicated. However, there are several ways out. First, wavelet-based coders, in general, also allow setting the quantization step, and it should be proportional to σ (if known a priori) or to its estimate. The proportionality factor depends upon the algorithm used for wavelet coefficient normalization in a given coder. Second, it is possible to use AGU with QS = 4.5σ, to determine the resulting CR, and then to set the corresponding bpp for SPIHT or JPEG2000. This possibility takes into account the fact that the coders AGU and SPIHT or JPEG2000 provide approximately the same CR. The drawbacks of this method are that one needs to have AGU at one's disposal and that compression is carried out twice, first by AGU and later by SPIHT or JPEG2000. The third way is the following. An iterative procedure has been proposed earlier for providing a given PSNR or MSE for a compressed image with respect to the original one (subject to compression). We propose to apply this procedure for providing a preset MSE, where MSE is calculated for a compressed/decompressed image with respect to the corresponding noisy one (see equation (2)). Note that this procedure assumes carrying out multiple compression/decompression cycles until a bpp that produces the desired MSE is reached. The procedure should be started from a rather large bpp (about 6 for 8-bit images) and continued with bpp_{i+1} = bpp_i − Δbpp, where Δbpp denotes the step of bpp reduction and bpp_i denotes the bpp used at the i-th iteration.
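The third, iterative way can be sketched as follows. The codec round trip is a user-supplied function; the toy codec and the particular target MSE used in the example are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def find_bpp_for_target_mse(codec, noisy, target_mse,
                            bpp_start=6.0, step=0.25, bpp_min=0.05):
    """Iterative rate selection: start from a rather large bit rate and
    reduce it (bpp_{i+1} = bpp_i - step) until the MSE between the
    decompressed and the *noisy* image reaches target_mse.
    `codec(img, bpp)` is a compress/decompress round trip."""
    bpp = bpp_start
    while bpp > bpp_min:
        if np.mean((codec(noisy, bpp) - noisy) ** 2) >= target_mse:
            return bpp
        bpp -= step
    return bpp_min

def toy_codec(img, bpp):
    """Stand-in codec: blends the image toward its mean as bpp shrinks,
    so distortion grows monotonically with compression."""
    w = min(bpp / 6.0, 1.0)
    return w * img + (1 - w) * img.mean()
```

With SPIHT or JPEG2000 substituted for `toy_codec` and the target MSE tied to the noise variance, the loop terminates at a bit rate in the neighborhood of the desired operation point.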
Summarizing the obtained results and taking into account practical aspects of lossy compression, it is possible to conclude the following. In the sense of better visual quality and larger compression ratio, the coder ADCTC is preferable. However, AGU and, especially, ADCTC require more computations and, in fact, they can be considered extended versions of JPEG. Meanwhile, although JPEG2000 is a newer standard, it is not yet widely used. Therefore, the practical choice of an appropriate coder remains an open question and depends on the application at hand and the priority of requirements.
Analysis of the obtained results demonstrates that, according to the visual quality metrics MSSIM, PSNR-HVS, and PSNR-HVS-M, maxima of these metrics can exist when compressing images corrupted by additive noise. If the PCC of a coder is the quantization step, then the recommended choice is QS = 4.5σ. The aforementioned maxima might (but do not necessarily) take place for relatively simple images corrupted by rather intensive noise. For highly textural images corrupted by less intensive noise, the dependencies of the metrics on QS are monotonically decreasing. However, even in these cases the recommended choice is reasonable. This means that CR should be adapted to noise statistics.
The coder ADCTC provides the best visual quality of compressed images among the considered coders.
Probably, it is also desirable to adapt the quantization step to global characteristics (complexity, context) of the images to be compressed. For highly textural images, it might be reasonable to set slightly smaller quantization steps. However, it is not yet clear how such a preclassification of images (as more or less complicated) can be carried out.
In the future, we plan to consider more complex noise models and to analyze color images. Following the recommendation of an anonymous reviewer, we also plan to consider images that contain text as well as radiological images. Besides, it is worth obtaining statistics of the visual quality metrics for a considerably larger number of test images to make our recommendations more practical and reliable.
Bovik A: Handbook of Image and Video Processing. Academic Press, Orlando, Fla, USA; 2000.
Furht B, Marques O: The Handbook of Video Databases: Design and Applications. CRC Press, Boca Raton, Fla, USA; 2003.
Choong MK, Logeswaran R, Bister M: Improving diagnostic quality of MR images through controlled lossy compression using SPIHT. Journal of Medical Systems 2006, 30(3):139-143. 10.1007/s10916-005-8374-4
Mielikainen J, Toivanen P, Kaarna A: Linear prediction in lossless compression of hyperspectral images. Optical Engineering 2003, 42(4):1013-1017. 10.1117/1.1557174
Gupta N, Swamy MNS, Plotkin E: Despeckling of medical ultrasound images using data and rate adaptive lossy compression. IEEE Transactions on Medical Imaging 2005, 24(6):743-754.
Kaarna A: Compression of spectral images. In Vision Systems: Segmentation and Pattern Recognition. Edited by: Ohinata G, Dutta A. I-Tech, Vienna, Austria; 2007:269-298.
Penna B, Tillo T, Magli E, Olmo G: Transform coding techniques for lossy hyperspectral data compression. IEEE Transactions on Geoscience and Remote Sensing 2007, 45(5):1408-1421.
Zeng W, Daly S, Lei S: An overview of the visual optimization tools in JPEG 2000. Signal Processing: Image Communication 2002, 17(1):85-104. 10.1016/S0923-5965(01)00029-7
Chandler DM, Masry MA, Hemami SS: Quantifying the visual quality of wavelet-compressed images based on local contrast, visual masking, and global precedence. Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, November 2003 1393-1397.
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13(4):600-612. 10.1109/TIP.2003.819861
Egiazarian K, Astola J, Ponomarenko N, Lukin V, Battisti F, Carli M: New full-reference quality metrics based on HVS. Proceedings of the 2nd International Workshop on Video Processing and Quality Metrics, 2006, Scottsdale, Ariz, USA 1-4. CD-ROM
De Simone F, Ticca D, Dufaux F, Ansorge M, Ebrahimi T: A comparative study of color image compression standards using perceptually driven quality metrics. Applications of Digital Image Processing XXXI, 2008, San Diego, Calif, USA, Proceedings of SPIE 7073:
Ponomarenko N, Silvestri F, Egiazarian K, Astola J, Carli M, Lukin V: On between-coefficient contrast masking of DCT basis functions. Proceedings of the 3rd International Workshop on Video Processing and Quality Metrics, 2007, Scottsdale, Ariz, USA 1-4. CD-ROM
Wallace GK: The JPEG still picture compression standard. Communications of the ACM 1991, 34(4):30-44. 10.1145/103085.103089
Taubman D, Marcellin M: JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer, Boston, Mass, USA; 2002.
Wang Z, Simoncelli EP, Bovik AC: Multi-scale structural similarity for image quality assessment. Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers, November 2003 1398-1402.
Ponomarenko N, Battisti F, Egiazarian K, Astola J, Lukin V: Metrics performance comparison for color image database. Proceedings of the 4th International Workshop on Video Processing and Quality Metrics, 2009, Scottsdale, Ariz, USA 1-6. CD-ROM
Mitsa T, Varkur KL: Evaluation of contrast sensitivity functions for the formulation of quality measures incorporated in halftoning algorithms. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, April 1993 301-304.
Wei D, Odegard JE, Guo H, Lang M, Burrus CS: Simultaneous noise reduction and SAR image data compression using best wavelet packet basis. Proceedings of the IEEE International Conference on Image Processing, October 1995, Washington, DC, USA 200-203.
Odegard JE, Guo H, Burrus CS, Baraniuk RG: Joint compression and speckle reduction of SAR images using embedded zero-tree models. Proceedings of Workshop on Image and Multidimensional Signal Processing, March 1996, Belize City, Belize 80-81.
Starck J-L, Murtagh F, Louys M: High quality astronomical image compression. Vistas in Astronomy 1997, 41(3):439-445. 10.1016/S0083-6656(97)00049-4
White RL, Percival JW: Compression and progressive transmission of astronomical images. Advanced Technology Optical Telescopes V, 1994, Kona, Hawaii, USA, Proceedings of SPIE 2199: 703-713.
Al-Shaykh OK, Mersereau RM: Lossy compression of noisy images. IEEE Transactions on Image Processing 1998, 7(12):1641-1652. 10.1109/83.730376
Chang SG, Yu B, Vetterli M: Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing 2000, 9(9):1532-1546. 10.1109/83.862633
Ponomarenko N, Lukin V, Zriakhov M, Egiazarian K, Astola J: Lossy compression of images with additive noise. Proceedings of the 7th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS '05), 2005, Lecture Notes in Computer Science 3708: 381-386.
Ponomarenko N, Lukin V, Zriakhov M, Egiazarian K, Astola J: Estimation of accessible quality in noisy image compression. Proceedings of European Signal Processing Conference (EUSIPCO '06), September 2006, Florence, Italy 1-4.
Chan TCL, Hsung T-C, Lun DP-K: Improved MPEG-4 still texture image coding under noisy environment. IEEE Transactions on Image Processing 2003, 12(5):500-508. 10.1109/TIP.2003.810591
Lukin VV, Ponomarenko NN, Zelensky AA, Kurekin AA, Lever K: Compression and classification of noisy multichannel remote sensing images. Image and Signal Processing for Remote Sensing XIV, September 2008, Cardiff, UK, Proceedings of SPIE 1-12.
Lukin VV, Krivenko SS, Zriakhov MS, et al.: Lossy compression of images corrupted by mixed Poisson and additive Gaussian noise. Proceedings of International Workshop on Local and Non-Local Approximation in Image Processing (LNLA '09), August 2009 33-40.
Kopolovic I, Sziranyi T: Image structure preserving of lossy compression in the sense of perceptual distortion when using anisotropic diffusion preprocessing. In Fundamental Structural Properties in Image and Pattern Analysis. Vienna, Austria; 1999:145-154.
Ponomarenko NN, Lukin VV, Zriakhov MS, Kaarna A, Astola J: Automatic approaches to on-land/on-board filtering and lossy compression of AVIRIS images. Proceedings of International Geoscience and Remote Sensing Symposium (IGARSS '08), July 2008, Boston, Mass, USA 3: 254-257.
Davies ER, Charles D: Color image processing: problems, progress, and perspectives. In Advances in Nonlinear Signal and Image Processing. Hindawi, New York, NY, USA; 2006:301-328.
Foi A: Pointwise shape-adaptive DCT image filtering and signal-dependent noise estimation, Ph.D. thesis. Tampere University of Technology, Tampere, Finland; December 2007.
Plataniotis KN, Venetsanopoulos AN: Color Image Processing and Applications. Springer, New York, NY, USA; 2000.
Said A, Pearlman WA: A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology 1996, 6(3):243-250. 10.1109/76.499834
Ponomarenko N, Lukin V, Egiazarian K, Astola J: DCT based high quality image compression. Proceedings of the 14th Scandinavian Conference on Image Analysis (SCIA '05), June 2005, Joensuu, Finland 1177-1185.
Ponomarenko N, Lukin V, Egiazarian K, Astola J: ADCT: a new high quality DCT based coder for lossy image compression. Proceedings of International Workshop on Local and Non-Local Approximation in Image Processing (LNLA '08), August 2008, Lausanne, Switzerland 1-6. CD-ROM
Lukin VV, Abramov SK, Ponomarenko NN, Vozel B, Chehdi K: Methods for blind evaluation of noise variance in multichannel optical and radar images. Telecommunications and Radio Engineering 2006, 65(6):527-556. 10.1615/TelecomRadEng.v65.i6.40
Abramov SK, Lukin VV, Vozel B, Chehdi K, Astola JT: Segmentation-based method for blind evaluation of noise variance in images. Journal of Applied Remote Sensing 2008, 2(1).
Wang Z, Bovik AC: Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine 2009, 26(1):98-117.
Egiazarian K, Astola J, Helsingius M, Kuosmanen P: Adaptive denoising and lossy compression of images in transform domain. Journal of Electronic Imaging 1999, 8(3):233-245. 10.1117/1.482673
Ponomarenko N, Lukin V, Egiazarian K, Delp E: Comparison of lossy compression performance on natural color images. Proceedings of the 27th Conference on Picture Coding Symposium (PCS '09), May 2009, Chicago, Ill, USA 369.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Ponomarenko, N., Krivenko, S., Lukin, V. et al. Lossy Compression of Noisy Images Based on Visual Quality: A Comprehensive Study. EURASIP J. Adv. Signal Process. 2010, 976436 (2010). https://doi.org/10.1155/2010/976436