Open Access

Efficiency analysis of color image filtering

  • Dmitriy V Fevralev1,
  • Nikolay N Ponomarenko1,
  • Vladimir V Lukin1,
  • Sergey K Abramov1,
  • Karen O Egiazarian2 and
  • Jaakko T Astola2
EURASIP Journal on Advances in Signal Processing 2011, 2011:41

https://doi.org/10.1186/1687-6180-2011-41

Received: 2 June 2011

Accepted: 15 August 2011

Published: 15 August 2011

Abstract

This article addresses the question of under which conditions filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, where the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, are practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can be clearly improved by filtering.

Keywords

image filtering; filter efficiency; quality metrics; color image database

1. Introduction

A huge number of color images are acquired nowadays by professional and consumer digital cameras, mobile phones, remote sensing systems, etc., and used for various purposes [1-5]. A large percentage of these images are of appropriate quality and need no processing for enhancement. However, there are quite a few images which are degraded. One of the main factors affecting color image quality is noise, which might be of different types and have various characteristics. Typical sources of noise are low exposure in bad conditions of image acquisition, thermal and shot noise [2], etc. Thus, image filtering (also often called denoising) is widely used to remove undesirable noise while preserving the useful information in images. The purposes of filtering can be image enhancement (in the sense of better visual quality) and achieving better pre-conditions for image classification and compression, object detection [6-9], etc.

A large number of filters have been proposed so far (see [6, 8-12] and references therein). Such a variety of approaches is explained by several reasons. One reason is the fact that users and customers are often unsatisfied with the achieved results. This may come from the known fact that, alongside the positive effect of noise suppression, any filter more or less distorts useful information, such as details, edges, and texture. The second reason is historical: new mathematical fundamentals for filtering have appeared steadily during the last 40 years, as robust estimation theory in the 1970s and 1980s [10, 13] and wavelets, PCA, and ICA in the 1990s [11, 14] were developed. Also, many new locally adaptive and non-local image filtering techniques have been designed recently (see [8, 15-17] and references therein). The third reason is that more accurate and adequate noise models have been designed and new practical situations for which the already designed filters perform poorly have been found [18-21]. Next, for many applications there is a need to carry out image processing in an automatic (fully blind), robust, adaptive, and intelligent way, better suited for solving the final task [22-25]. This is especially crucial when there is a need to process a large number of images, e.g., multichannel images and/or remote-sensing data on-board. The fifth reason is that new visual quality metrics (criteria, indices) have been developed recently to assess the visual quality of data [26-32], but they are seldom used in filter design and efficiency analysis. The sixth reason is that images to be filtered can be one-channel (grayscale) [10, 13], three-channel (as color images in RGB representation) [1, 2, 6], or multichannel (e.g., multi- and hyperspectral) [3, 4]. For filtering multichannel images, two approaches are possible, namely, component-wise denoising and vector (3D) processing that takes into account the inter-channel correlation of image data [1, 2, 6, 12, 33, 34]. Each of them has advantages and shortcomings, and a thorough analysis of results is needed for deciding which filter to apply.

In this article, we focus on the four latter problems with application to color image filtering. One problem is that in most books and papers addressing color image filtering, the noise is supposed to be independent and identically distributed (i.i.d.) [6, 12, 33]. This is an idealization that leads to overestimation of the (expected) filtering efficiency reachable in practice [35-37]. Therefore, alongside the i.i.d. noise case, we also study the case of spatially correlated noise, which is more realistic for color images [20].

It is also worth noting that the nature and statistics of noise in color images are not yet well described and modeled. Although in many references the noise is considered to be Gaussian and purely additive, this, strictly speaking, does not hold in practice [18-21, 38]. The noise in original (raw) images is clearly signal-dependent [1, 21, 38]. After the nonlinear operations with data in the image processing chain [2], the assumption of noise Gaussianity and approximately constant noise variance holds only for component image fragments with local mean intensity from about 20 to about 230...235 [18, 39]. Moreover, even for such fragments, the noise variance can slightly differ between the R, G, and B components, where it is usually the smallest for the G component. For fragments with local mean values outside these limits, the noise variance is usually smaller and clipping effects can take place. This makes the analysis of filtering efficiency problematic. To simplify the situation and comparisons, below we analyze additive Gaussian noise with variance values equal for all three components.

Besides, we pay particular attention to the visual quality of original and filtered images. Note that considerable advances have taken place in the design of new visual quality metrics (indices) in recent years. It has been demonstrated many times that the mean square error (MSE) is not an adequate metric for characterizing the visual quality of original and processed images [26-31, 40-42]. Experiments with a large number of observers have demonstrated that a peak signal-to-noise ratio (PSNR) increase of 3 dB (or, equivalently, an MSE reduction by half) due to filtering does not guarantee improvement of the filtered image visual quality compared to the original noisy one [38]. Many quality metrics, e.g., DCTune, WSNR, SSIM, MSSIM, and PSNR-HVS-M, have been designed recently and shown to be more adequate than MSE and PSNR in characterizing the visual quality of original noisy and filtered images. Thus, below, the analysis of filtering efficiency is carried out using PSNR, PSNR-HVS-M [31], and, in some cases, MSSIM [27]. The two latter metrics are able to take into account several important specific features of the human vision system (HVS), and they have been demonstrated to be among the best for the considered application [29].

One more problem with filter design and comparison is that for many years there were no established theoretical limits of filtering efficiency. Thus, it was not clear how large a gain in image quality can be provided by filtering, even in terms of the output MSE lower bound. Fortunately, a breakthrough paper [43] has appeared recently. It has answered at least some important questions for the case of filtering grayscale images (or component-wise processing of color images) corrupted by i.i.d. noise. Below, we will give more insight into this aspect.

A drawback of many publications dealing with image filtering is the use of a limited set of standard images. Meanwhile, recent research results show that "old" standard images such as, e.g., Lena are, in fact, not noise-free [44, 45]. This causes problems in the correct estimation of filtering efficiency and careful comparison of filter performance. Therefore, our goal is to test filters for a larger number of real-life color images which are practically noise-free. The set of natural color images of Kodak (http://r0k.us/graphics/kodak/) and the image database TID2008 [29] (http://www.ponomarenko.info), which is based on the Kodak set, provide an opportunity for such thorough testing.

Finally, starting from the paper [46], two approaches to color image filtering began to be developed and analyzed in parallel: component-wise and vector (3D). A lot of vector filters that exploit the inherent inter-channel correlation of color image components have been proposed since then [6, 12]. In this article, we basically consider DCT-based filters [15, 33, 35, 47] since they have shown themselves to be quite simple, efficient, and easily adaptable to processing grayscale and color images corrupted by i.i.d. and spatially correlated noise. We give some results for other state-of-the-art filters for comparison purposes. One more goal of testing DCT-based filters for a large number of color images and noise variances is to find the practical situations for which filtering is desirable or not expedient. Note that, in our opinion, filtering of color images meant for visual inspection is not needed in two cases: (1) if noise in an original noisy image is not visible; (2) if an applied filter does not improve the visual quality of the processed (output) image compared to the corresponding original one.

The rest of this article is structured as follows. First, we give a brief description of TID2008 and the possibilities offered by it in Section 2. Then, potential limits of filtering efficiency and the results provided by known filters are considered in Section 3. Thorough efficiency analysis for additive white (i.i.d.) Gaussian noise (AWGN) and spatially correlated noise is carried out in Sections 4 and 5, respectively. Finally, the conclusions are drawn.

2. Tampere image database 2008 (TID) and used noise models

The color image database TID2008 was created in 2008. The main goal of its creation was to provide wider opportunities for performance analysis of different visual quality metrics and their comparison, relative to other databases of distorted images such as, e.g., LIVE [48], which contains images with five types of degradations. The database TID2008 contains 25 distortion-free test color images (see Figure 1) and 1,700 distorted ones. Seventeen types of distortions have been simulated, including AWGN (the first type of distortion), spatially correlated noise (the third type), and others, in particular, distortions in filtered images due to residual noise and imperfection of filters. Four levels of distortion are provided, adjusted so that PSNR values are about 30, 27, 24, and 21 dB for each color image (see [29, 40] for more details).
Figure 1

Noise-free test color images of TID2008 (each image has 384 rows and 512 columns, 24 bits per pixel).

Nowadays, the TID2008 is used for purposes other than those for which it was originally created [49, 50]. Since this image database already contains noisy and reference images, it can also be exploited for testing image filtering efficiency. Moreover, having noise-free images at one's disposal, it is easy to add noise to them with any required variance and, in general, of any type and statistical characteristics. Note that all the images, in contrast to the original Kodak database, are of equal size, which provides additional benefits in their processing and analysis. The images are of different content and complexity (complexity here means the percentage of pixels that belong to homogeneous image regions). In this sense, the images ##3 and 23 (Figure 1) are the simplest whilst the images ##13, 14, 5, and 18 are the most complex ones. The image #25 is not from the Kodak database; it was synthesized by the authors of TID2008 to test metric performance for artificial images. In general, no obvious differences between metric performance for real-life and artificial images have been observed in experiments.

The PSNR values equal to 30, 27, 24, and 21 dB mentioned above are provided for AWGN and spatially correlated noise by setting the variance σ2 equal to 65, 130, 260, and 520, respectively, for images with 8-bit representation in each color (R, G, and B) component. Noise independence in the color components has been assumed. Spatially correlated noise has been obtained by filtering 2D AWGN with a 3 × 3 mean filter and then rescaling to the required noise variance. After adding noise, the noisy image values have been clipped to the limits 0...255, i.e., clipping effects are present in the noisy images.
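The noise-generation procedure above can be sketched in a few lines of numpy; the function name and the border handling of the mean filter are our assumptions, not part of the TID2008 specification:

```python
import numpy as np

def add_noise(component, variance, correlated=False, rng=None):
    """Add zero-mean Gaussian noise of a given variance to one 8-bit
    color component. Spatially correlated noise is obtained by 3x3
    mean filtering of i.i.d. Gaussian noise and rescaling back to the
    required variance; the result is clipped to the 0...255 range."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, np.sqrt(variance), component.shape)
    if correlated:
        # 3x3 mean filter (replicated borders), then restore the variance
        padded = np.pad(noise, 1, mode="edge")
        noise = sum(padded[i:i + noise.shape[0], j:j + noise.shape[1]]
                    for i in range(3) for j in range(3)) / 9.0
        noise *= np.sqrt(variance / noise.var())
    noisy = component.astype(float) + noise
    return np.clip(noisy, 0, 255)  # clipping effects, as in TID2008
```

For σ2 = 65 and mid-range intensities the clipping step changes almost nothing, which matches the observation above that clipping matters mainly near the ends of the 0...255 range.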

The case of noise variance equal to 65 (distortion level 1) is the most interesting from a practical point of view since the noise is clearly visible for most images and, thus, it is desirable to apply filtering. The same relates to noisy images with noise variance σ2 = 130. Meanwhile, noise variances of 260 and 520 are seldom met in practice. Thus, let us concentrate on a more thorough study of the cases σ2 = 130, 65, and less. For all values of noise variance smaller than 65, images corrupted by i.i.d. and spatially correlated noise have been obtained in the same way as for the TID2008 images.

Let us illustrate some effects observed for noisy images. Figure 2a shows the test image #16 corrupted by i.i.d. noise with variance 65. Noise is visible in homogeneous image regions but masked in textural regions. The same test image corrupted by spatially correlated noise with the same variance is presented in Figure 2b. It is obvious that the visual quality of the latter image is worse: noise is well seen in practically all parts of this image. For both images, the values of input PSNR, defined as PSNRinp = 10 log10(255²/σ²), are equal to 30 dB. For the image in Figure 2a, the metric PSNR-HVS-M [31] equals 33.2 dB, whilst for the image in Figure 2b PSNR-HVS-M = 26.6 dB (a larger PSNR-HVS-M relates to better visual quality). Thus, already from this example one can see that PSNR-HVS-M characterizes image visual quality more adequately than the conventional PSNR.
Figure 2

The test image #16 corrupted by i.i.d. (a) and spatially correlated (b) noise with σ2 = 65.

The reason is that the metric PSNR-HVS-M accounts for two important features of HVS. First, it exploits the fact that sensitivity to distortions in low spatial frequencies is larger than to distortions in high spatial frequencies. Second, masking effect (worse ability of human vision to notice distortions in heterogeneous and textural image areas) is taken into account.

3. Potential limits and preliminary analysis of filter efficiency

As mentioned in Section 1, it is possible to derive the lower bound output MSE (further denoted as MSElb) for denoising a grayscale image corrupted by i.i.d. noise [43], under the condition that one also has the corresponding noise-free image. This allows determining MSElb for component-wise processing of the color images in TID2008 for the first type of distortion (i.i.d. Gaussian noise). The obtained results for 11 color images from TID2008 (σ2 = 65) are presented in Table 1. We have selected for analysis the most textural images (##13, 14, 5, 6, 8, 18), the simplest-structure test images (##10 and 23), one example of a typical (middle-complexity) image (#11), and the artificial image #25.
Table 1

Lower bound MSE (MSElb) and output MSE of the DCT-based filter (MSEDCT) for components of color images in TID2008.

| Image index in TID2008 | R: MSElb | R: MSEDCT | G: MSElb | G: MSEDCT | B: MSElb | B: MSEDCT |
|---|---|---|---|---|---|---|
| 1  | 28.9 | 36.8 | 29.9 | 36.5 | 28.9 | 36.8 |
| 5  | 27.5 | 30.4 | 28.0 | 30.9 | 26.4 | 30.8 |
| 6  | 23.7 | 31.2 | 23.5 | 31.2 | 22.4 | 30.8 |
| 8  | 23.2 | 32.5 | 23.3 | 32.1 | 22.6 | 32.0 |
| 10 | 7.1  | 17.6 | 7.5  | 18.1 | 7.3  | 17.8 |
| 11 | 16.4 | 26.4 | 16.2 | 26.3 | 15.3 | 25.8 |
| 13 | 41.0 | 46.6 | 42.6 | 46.2 | 37.2 | 46.3 |
| 14 | 20.7 | 31.0 | 20.0 | 30.6 | 19.5 | 30.7 |
| 18 | 21.4 | 28.7 | 20.4 | 28.1 | 17.4 | 28.3 |
| 23 | 4.6  | 12.9 | 4.7  | 12.7 | 4.8  | 12.6 |
| 25 | 14.7 | 19.4 | 14.0 | 18.2 | 13.5 | 18.3 |

The analysis shows the following:
  1. For a given image, the values MSElb are close to each other; this is explained by the known high similarity (inter-component correlation) of information content in the R, G, and B component images.
  2. The values MSElb can differ by up to 10 times depending upon image complexity (compare MSElb values for the test images ##13 and 23); this conclusion is in good agreement with data presented in the article [43], where it has been shown that the difference can be even larger.
  3. Larger MSElb values are observed for more complex-structure images; for the image #13, MSElb is only about 1.55 times smaller than σ2 = 65; thus, even the potential quality improvement due to filtering in terms of output MSE or output PSNR (PSNRout) is quite small.
  4. Meanwhile, for other images the improvement of PSNR (characterized by PSNRout - PSNRinp) can be considerable, up to 11 dB for the test image #23.
For our further study, it is important to recall some conclusions from previous analysis [40]. To provide better visual quality of a filtered image compared to the corresponding noisy one, it is necessary to ensure that the PSNR improvement due to filtering is at least 3...6 dB (the smaller the PSNR of the noisy image, the larger the PSNR improvement should be). The latter conclusion is based on the analysis of averaged mean opinion scores (MOS) [40], but it can be different for particular images.

Let us now briefly look at the output MSE values (MSEout) provided by some recently proposed filters applied component-wise. Consider first a standard DCT-based filter with 8 × 8 fully overlapping blocks and hard thresholding with the threshold T = 2.6σ [47], where σ is supposed to be known a priori. The obtained output MSEs, denoted as MSEDCT, are presented in Table 1. Their analysis allows us to draw the following preliminary conclusions:
  1. There is an obvious correlation between MSElb and MSEDCT: larger MSElb corresponds to larger MSEDCT.
  2. For larger MSElb, the ratio MSEDCT/MSElb is smaller, i.e., the standard DCT filter provides efficiency close to the potential limit; the same tendency has been observed in [43], where it has been demonstrated that state-of-the-art filters possess efficiency close to the reachable maximum for complex-structure images, especially if the noise variance is large; for such situations there is very limited room for further improvement of filter performance.
  3. Considerable room for further improvement of filter performance exists for the simplest-structure images (e.g., ##23 and 10), but MSEDCT for them is already quite small; thus, further improvement of filter performance is not so crucial.
  4. The results for the artificial image #25 are similar to those for typical real-life images such as the image #11.

One can argue that the standard DCT-based filter is not the best. Because of this, for comparison purposes we also present some results [51] for a more elaborate filter, BM3D [16], shown to be the best in [43]. For σ2 = 65 and the R component of color images, the BM3D filter produces MSEs equal to 27.8, 28.3, 45.0, 29.4, 27.5, and 11.5 for the images ##5, 8, 13, 14, 18, and 23, respectively. Comparison of these data to the corresponding data in Table 1 shows that BM3D is slightly more efficient than the DCT-based filter and its output MSEs are closer to MSElb. However, the difference is not significant. Note that a thorough comparison of different filters is not the main goal of this article. Here, it is important that DCT-based filters perform close to the currently reachable limit.
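For reference, the standard DCT-based filter discussed above can be sketched as follows. The 8 × 8 fully overlapping blocks and the hard threshold T = 2.6σ are taken from the text; details such as keeping the DC coefficient and uniformly averaging the overlapping estimates follow the common variant of this filter and are our assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] *= np.sqrt(1 / n)
    c[1:] *= np.sqrt(2 / n)
    return c

def dct_denoise(img, sigma):
    """Standard DCT filter sketch: 8x8 fully overlapping blocks,
    hard thresholding with T = 2.6*sigma (DC coefficient kept),
    uniform averaging of the overlapping block estimates."""
    C = dct_matrix(8)
    T = 2.6 * sigma
    acc = np.zeros(img.shape, dtype=float)
    cnt = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H - 7):
        for j in range(W - 7):
            d = C @ img[i:i + 8, j:j + 8] @ C.T   # forward 2D DCT
            mask = np.abs(d) >= T                 # hard thresholding
            mask[0, 0] = True                     # keep the block mean
            acc[i:i + 8, j:j + 8] += C.T @ (d * mask) @ C  # inverse DCT
            cnt[i:i + 8, j:j + 8] += 1
    return acc / cnt
```

On smooth content the block spectra are sparse, so most coefficients fall below T and the noise they carry is discarded, which is why the filter works best on simple-structure images.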

We have also determined MSElb for the cases σ2 = 130 and 260. For a given test image, MSElb for the case σ2 = 130 is about 1.7...1.8 times larger than MSElb for σ2 = 65. Similarly, MSElb values for σ2 = 260 are about 1.7...1.8 times larger than the corresponding MSElb values for σ2 = 130. The same tendency has been observed [43] for grayscale test images. The ratios MSEDCT/MSElb for larger noise variances are even smaller than for σ2 = 65. Thus, let us mainly concentrate on considering σ2 = 65 and smaller values as more realistic and interesting in practice. An interested reader can find some additional data for σ2 = 130 in [51, 52].

Unfortunately, the method and software of [43] do not allow determining potential limits of filtering efficiency for vector filtering of color images. However, there are initial results showing that MSElb values in this case should be considerably smaller than in the case of component-wise processing [51, 53]. We present results (output MSE3DDCT) for the 3D DCT-based vector filter [33], which uses a "spectral" DCT to decorrelate the color components and then applies a 2D DCT (see data in Table 2 for noise variances σ2 = 65 and σ2 = 130). Consider, for example, MSE3DDCT for the test image #13. The values are again quite close for the R, G, and B components and are approximately equal to 23 for σ2 = 65. This is almost half of MSElb for the component-wise processing case (see data in Table 1). The values of MSE3DDCT turn out to be smaller than the corresponding MSElb for the test images ##5 and 8 as well (compare data in Tables 1 and 2). Only for the simplest test image #23 are the values of MSE3DDCT larger than the corresponding MSElb values, although all MSE3DDCT are considerably smaller than the corresponding MSEDCT.
Table 2

Output MSE for the 3D DCT-based filter [33] (MSE3DDCT) for four color images in TID2008.

| Component | #13, σ2 = 65 | #13, σ2 = 130 | #5, σ2 = 65 | #5, σ2 = 130 | #8, σ2 = 65 | #8, σ2 = 130 | #23, σ2 = 65 | #23, σ2 = 130 |
|---|---|---|---|---|---|---|---|---|
| R | 22.0 | 38.9 | 17.3 | 29.1 | 18.2 | 33.2 | 9.0 | 13.8 |
| G | 22.6 | 40.8 | 17.3 | 29.4 | 17.5 | 33.4 | 8.8 | 13.7 |
| B | 24.6 | 42.1 | 17.5 | 29.2 | 18.8 | 31.3 | 9.5 | 14.6 |

Some other vector (3D) filters, such as C-BM3D [53], are able to produce even smaller output MSE than MSE3DDCT [54]. Besides, as has recently been demonstrated in [54], the lower bounds for vector filtering are about twice smaller than the corresponding MSElb values if noise is independent in the color components.

One should not be surprised by the fact that MSE3DDCT and the output MSE of some other vector filters can be smaller than the corresponding MSElb. This does not mean that the MSElb values derived according to [43] are incorrect. It only demonstrates two things. First, exploiting inter-component correlation in a filter allows a considerable improvement of filtering efficiency. Second, it is worth trying to derive the lower bound MSE for multichannel filtering in the future.

Table 2 also presents the results for σ2 = 130. It is seen that for a given image and color component, the values of MSE3DDCT for σ2 = 130 are about 1.5...1.7 times larger than for σ2 = 65. Thus, the tendency described above remains.
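The "spectral" decorrelation step of the 3D DCT filter can be illustrated as follows. The exact transform used in [33] may differ; this length-3 orthonormal DCT across the R, G, B dimension is a sketch of the idea that the first decorrelated channel collects the strongly correlated (luminance-like) content while the other two carry much weaker difference signals:

```python
import numpy as np

def spectral_dct3(rgb):
    """Apply a length-3 orthonormal DCT across the color dimension of
    an array of shape (3, H, W). Returns the decorrelated channels and
    the transform matrix s (inverse transform is s.T)."""
    s = np.array([[1.0, 1.0, 1.0],
                  [np.cos(np.pi / 6), 0.0, -np.cos(np.pi / 6)],
                  [0.5, -1.0, 0.5]])
    s /= np.linalg.norm(s, axis=1, keepdims=True)  # orthonormal rows
    # out[k] = sum_c s[k, c] * rgb[c]
    return np.tensordot(s, rgb, axes=(1, 0)), s
```

After this step, each decorrelated channel can be denoised with the 2D DCT filter, and `np.tensordot(s.T, channels, axes=(1, 0))` restores the RGB representation; since the transform is orthonormal, i.i.d. noise stays i.i.d. with the same variance in the decorrelated channels.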

One should also keep in mind that nowadays there are quite a few blind (automatic) methods for estimating the noise variance needed to set the filter's threshold parameter, see, e.g., [55-57] and references therein. For the i.i.d. additive noise case, these methods allow estimating the noise variance or standard deviation accurately enough even for highly textural images such as the test image #13. These methods can be applied if the noise variance in the color components is not known in advance, creating the basis for fully automatic processing [58]. If the noise in color images has the specific properties described in Section 1 and the articles [18, 39], we recommend using in blind estimation of noise variance only those image fragments (blocks, scanning windows) with local mean from 25 to 230.

4. Filter efficiency analysis for the TID2008 color images, AWGN case

Let us start with a brief description of the quantitative criteria of filtering efficiency used. The filter output MSEs for color image components are calculated as

$$\mathrm{MSE}_k = \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}\bigl(\hat{I}^{\,k}_{ij}-I^{\,k}_{ij}\bigr)^{2},\qquad k=1,2,3, \tag{1}$$

where $\hat{I}^{\,k}_{ij}$ is the ijth sample of the filtered kth component of a color image in RGB representation, $I^{\,k}_{ij}$ denotes the true (noise-free) value of the ijth pixel of the kth component, and I, J define the processed image size (384 rows and 512 columns for TID2008 color images).

Output PSNR for the considered 8-bit representation of each color component is determined as

$$\mathrm{PSNR}^{\,k}_{\mathrm{out}} = 10\log_{10}\bigl(255^{2}/\mathrm{MSE}_k\bigr). \tag{2}$$

Alongside the standard PSNR, we have analyzed the visual quality metric PSNR-HVS-M. For calculating PSNR-HVS-M, a perceptually weighted MSE (denoted below as MSE_HVS-M) is derived first (see details in [31]), and then

$$\text{PSNR-HVS-M}_k = 10\log_{10}\bigl(255^{2}/\mathrm{MSE}_{\text{HVS-M},k}\bigr). \tag{3}$$

The source code is available at http://www.ponomarenko.info/psnrhvsm.htm. Similar to PSNR, PSNR-HVS-M is expressed in dB, and larger values correspond to better visual quality. The cases PSNR-HVS-M > 40 dB relate to almost perfect visual quality, where noise and distortions are practically not seen [59]. Also note that the dynamic range D of the image representation should be used in (2) and (3) instead of 255 if images are not in 8-bit representation.
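A minimal sketch of computing (1) and (2); the same function yields (3) once the weighted MSE of [31] is substituted for the plain MSE:

```python
import numpy as np

def psnr(filtered, reference, dynamic_range=255.0):
    """PSNR per equations (1)-(2): plain MSE between the filtered and
    noise-free components, mapped to dB against the dynamic range.
    For PSNR-HVS-M, replace mse with the weighted MSE of [31]."""
    diff = filtered.astype(float) - reference.astype(float)
    mse = np.mean(diff ** 2)                    # equation (1)
    return 10 * np.log10(dynamic_range ** 2 / mse)  # equation (2)
```

For example, an 8-bit component corrupted by noise with variance 65 gives an input PSNR of about 30 dB, matching the TID2008 distortion level 1.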

Let us first consider the dependences PSNR-HVS-M_k(n), where n denotes the image index in TID2008, before and after filtering for σ2 = 65 (see Figure 3a).
Figure 3

PSNR-HVS-M_k(n) before (thin lines) and after filtering for σ2 = 65 (a) and 25 (b), AWGN.

The lower group of three curves corresponds to input (noisy) images and the upper group to the filtered ones, respectively. There are several important observations that follow from the analysis of these curves:
  1. Again, the curves for all color components are very similar; this relates both to the group of input (noisy) images and to the output (filtered) ones.
  2. For original (noisy) images, the lowest visual quality takes place for the simplest-structure images (the smallest values of PSNR-HVS-M_k(n) are observed for the test images ##2, 3, 4, 15, 16, 20, and 23, about 33 dB for all of them); this is due to the fact that for textural images noise is considerably masked, while for simple-structure images it is well seen in homogeneous regions.
  3. For all the test images, visual quality has been improved; however, the improvement differs considerably: the largest improvement is observed for simple-structure images, e.g., the test images ##3, 15, and 23; the smallest improvement takes place for the most complex-structure test images, e.g., ##5, 13, and 14.

Note that the different filtering efficiencies result from the test image properties. For example, the test image #13 is obviously more complex than the test images #3 and #23. The problem of efficient filtering of textural images is typical and crucial not only for DCT-based filters but for almost all filters. Generally speaking, this is one of the most complicated problems in image filtering (see also data in Table 1).

Here, we would like to draw readers' attention to recently obtained results [59]. Visibility of distortions has been analyzed for images compressed in a lossy manner. It has been shown that for PSNR-HVS-M > 40 dB or MSSIM > 0.99 the distortions are practically non-noticeable. We have checked this for noisy and filtered color images as well as for images with watermarks, and the aforementioned property holds.

Keeping this in mind, it is possible to state that for AWGN with σ2 = 65 noise is clearly visible in original images (the values of PSNR-HVS-M_k(n) are within the limits 33...36 dB, see the lower group of curves in Figure 3). In processed images, residual noise and distortions introduced by filtering are less noticeable but still visible.

We have also considered several values of AWGN variance smaller than 65 (the corresponding noisy images have been generated using the reference images in TID2008). Consider the most interesting case of σ2 = 25; here PSNRinp is equal to 34.1 dB for all noisy images. The results are presented in Figure 3b. The lower group of three curves relates to the noisy images and the upper group corresponds to the filtered ones. The main conclusions drawn from the analysis of these curves are the same as conclusions 1-3 given above. The difference consists in the following. The smallest values of PSNR-HVS-M_k(n), observed for the noisy test images ##2, 3, 4, 15, 16, 20, and 23, are within the limits 37.5...38 dB, i.e., considerably larger than for the case of σ2 = 65. For the most complex-structure images, e.g., the test images ##5, 8, 13, and 14, the values of PSNR-HVS-M_k(n) are larger than 40 dB even for the noisy (unfiltered) images, and there is no need to process them to improve visual quality.

For almost all the filtered test images, the values of PSNR-HVS-M_k(n) are larger than 40 dB. This means that the processed images are practically indistinguishable from the corresponding reference ones. Moreover, if more sophisticated filtering methods than component-wise DCT-based denoising are applied, it is possible to provide almost "ideal" visual quality of processed images (PSNR-HVS-M_k > 40 dB) for noise variances larger than 25. As examples, let us give data for two images from TID2008: one of the simplest (#3) and one of the most complex (#13). If the 3D DCT filter [33] is applied to the test image #3 corrupted by AWGN with σ2 = 35, the values of PSNR-HVS-M_k are equal to 41.84, 41.94, and 41.47 dB for the R, G, and B components, respectively. Similarly, for the image #13 we have 42.66, 42.00, and 41.1 dB (all over 40 dB). Thus, the upper limit of AWGN variance for which filtered images are indistinguishable from reference ones is even higher if efficient 3D filters are employed.

Our studies have also shown that if σ2 ≤ 10...15, noise is practically (with large probability) invisible in original images. This means that there is no reason to apply filtering if AWGN noise has variance σ2 ≤ 10...15.

In terms of conventional PSNR_k, the smallest values for σ2 = 25 are observed for the components of the complex-structure test image #13 (about 35 dB), while for the simplest test images (##3, 7, 20, 23, and 25) the values of PSNR_k reach 40 dB. Therefore, in terms of PSNR_k, component-wise DCT-based filtering is still efficient. More complicated filters [33, 53] are able to provide an even larger increase of PSNR after denoising.

A practical question, then, is whether one can predict the efficiency of filtering, i.e., whether it is reasonable to perform filtering for a given image. For this purpose, one first has to be sure that the noise is i.i.d. Second, one has to estimate the noise variance: if it is smaller than 10...15, no filtering needs to be performed at all; if it is smaller than about 25 (for component-wise DCT-based filtering) or about 35 (for 3D DCT-based denoising), filtering provides practically perfect visual quality. Earlier, we mentioned the methods for blind evaluation of noise variance, which are accurate enough. Thus, it would also be useful to have a parameter allowing one to establish whether the noise is i.i.d. or not.

One such parameter has been proposed in [36]. The methodology of its determination is the following. For each 8 × 8 block with its left upper corner characterized by indices l and m, two local estimates of noise variance are calculated, in the spatial domain as

$$\hat{\sigma}^{2}_{klm(\mathrm{sp})} = \frac{1}{63}\sum_{i=l}^{l+7}\sum_{j=m}^{m+7}\bigl(I^{\,k}_{ij}-\bar{I}_{klm}\bigr)^{2}, \tag{4}$$

where $\bar{I}_{klm}$ is the block mean, and in the DCT domain as

$$\hat{\sigma}^{2}_{klm(\mathrm{DCT})} = \frac{1}{N_{\mathrm{HF}}}\sum_{(p,q)\in \mathrm{HF}} D^{2}_{klm}(p,q), \tag{5}$$

where D_klm(p,q) are the DCT coefficients of the lmth block of the kth component of a given color image and HF denotes the set of N_HF high-frequency coefficient indices. Then, for each block the ratio R_klm = σ̂²_klm(sp)/σ̂²_klm(DCT) is calculated. The histogram of these ratios is formed and its mode R̂_k is determined by the method given in [60]. The distribution of R_klm for all k and almost all images has a quasi-Gaussian component with a maximum coordinate close to unity (for i.i.d. noise) and a right-hand heavy tail, where the ratios relating to this tail are obtained in heterogeneous image blocks.
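A sketch of estimating this parameter follows. The choice of the high-frequency subset (p + q ≥ 8) and the simple histogram-argmax mode search are our assumptions here, since [36, 60] specify the exact procedures:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] *= np.sqrt(1 / n)
    c[1:] *= np.sqrt(2 / n)
    return c

def ratio_mode(component, bins=200):
    """Mode of the histogram of R_klm = (spatial variance estimate) /
    (high-frequency DCT variance estimate) over non-overlapping 8x8
    blocks; close to 1 for i.i.d. noise in simple-structure images."""
    C = dct_matrix(8)
    p, q = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    hf = (p + q) >= 8            # assumed high-frequency subset
    ratios = []
    H, W = component.shape
    for i in range(0, H - 7, 8):
        for j in range(0, W - 7, 8):
            block = component[i:i + 8, j:j + 8].astype(float)
            var_sp = block.var(ddof=1)        # spatial estimate, cf. (4)
            d = C @ block @ C.T
            var_dct = (d[hf] ** 2).mean()     # DCT estimate, cf. (5)
            ratios.append(var_sp / var_dct)
    hist, edges = np.histogram(ratios, bins=bins, range=(0, 5))
    k = hist.argmax()
    return (edges[k] + edges[k + 1]) / 2      # mode of the histogram
```

Texture and spatially correlated noise both inflate the spatial estimate relative to the high-frequency DCT estimate, which is why large mode values flag complex images or non-i.i.d. noise.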

Let us analyze the behavior of the histogram-mode estimates R̂_k. The dependences of R̂_k on n for all color components are given in Figure 4 as curves of the corresponding color (for σ2 = 65 and 25). As seen, these dependences are very similar. Almost equal values of R̂_k are observed for the R, G, and B components of a given test color image and a fixed noise variance. Noticeable differences in the values are only seen for the test image #20; the reason is the considerable clipping effects observed for this test image. The values of R̂_k for the larger noise variance are slightly smaller (compare these values for the same images in Figure 4a, b).
Figure 4

Dependences R̂ k ( n ) for components of color images for σ 2 = 65 (a) and 25 (b).

The most important observation is that the largest values of R̂ k occur for the most textural images, such as the test images #5, #8, #13, and #18. For the other test images, the values of R̂ k are quite close to unity. Thus, the parameter R̂ k seems to be "correlated" with image complexity and filtering efficiency. To check this assumption, let us determine the Spearman rank correlation factor [61] (note that rank correlation is used here to avoid fitting problems). First, we have calculated the Spearman rank correlation Rk Sp for the data arrays R̂ k (n) (Figure 4a) and PSNR k (n) at filter outputs, n = 1,...,25. For all the color components, the values of Rk Sp are in the range -0.9...-0.8. The fact that the values of Rk Sp are negative means that a reduction of R̂ k relates to an increase of PSNR k (n). The fact that the absolute values of Rk Sp are quite large (close to unity) shows that there exists a strong correlation between R̂ k and PSNR k (n).

We have also calculated Rk Sp for the data arrays R̂ k (n) (Figure 4b) and PSNR k (n) at filter outputs, n = 1,...,25, for noise variance equal to 25. The values of Rk Sp fall into the same range. Thus, a larger increase of PSNR can, most probably, be provided if R̂ k is small.
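The rank-correlation computation itself is straightforward; the sketch below uses purely illustrative per-image values (not data from TID2008) to show the negative association between the ratio parameter and output PSNR.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-image values, for illustration only: the mode parameter
# for ten images and the corresponding output PSNR of a filter.
r_hat = np.array([1.01, 1.03, 1.12, 1.02, 1.20, 1.05, 1.01, 1.15, 1.04, 1.08])
psnr_out = np.array([33.5, 32.8, 29.1, 33.0, 27.9, 31.5, 33.9, 28.5, 32.2, 30.4])

# Larger ratio (more complex image) goes with smaller output PSNR, so the
# Spearman rank correlation comes out strongly negative.
rho, p_value = spearmanr(r_hat, psnr_out)
```

Spearman's coefficient is used exactly because only the monotone association matters here; no functional fit between the two quantities is assumed.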

Besides, if noise is i.i.d., then a considerable deviation of R̂ k from 1.0 (e.g., R̂ k larger than 1.08) shows that the image to be filtered is quite complex (textural and/or containing many fine details). In turn, this also means that for such an image it is difficult to expect efficient filtering in the sense of a considerable increase of PSNR-HVS-M.

A substantial correlation also exists between R̂ k (Figure 4) and PSNR-HVS-M k (n) before filtering (lower groups of curves in Figure 3). The Spearman rank correlation factors Rk Sp for these arrays are within the limits 0.8...0.9. Positive values mean that if R̂ k is rather small, then the corresponding PSNR-HVS-M k (n) is rather small too, i.e., noise in such an image is not considerably masked. Therefore, the parameter R̂ k, which can be determined for an image in advance (before filtering), can serve for characterizing image complexity and noise masking effects as well as for predicting the efficiency of filtering. Further analysis results and conclusions are presented in the following section.

Here, we would like to give more insight into the visual quality of noisy and filtered images. For this purpose, let us recall how the metric PSNR-HVS-M (3) is calculated [31]. The first step is to determine σ2 HVS-M. This parameter is the average of the local MSEs σ2 HVS-M lm over all analyzed blocks. The local MSEs σ2 HVS-M lm are calculated in 8 × 8 blocks with the left upper corner defined by indices l and m; they are determined in the DCT domain taking into account the contrast sensitivity function and masking [31]. Local MSEs σ2 HVS-M lm can be smaller or larger than the noise variance. The inequality σ2 HVS-M lm > σ2 usually holds if noise is spatially correlated (or a realization of i.i.d. noise in a given block exhibits such quasi-correlation) and/or there is no masking for a given block (this mostly happens for homogeneous image blocks).

Consider as an example the G component of the test image #14 corrupted by i.i.d. noise with variance 25 (shown in Figure 5a). Noise can be noticed in homogeneous image regions such as the rubber boat surface. In other places, such as the water surface, noise is practically not seen because of masking effects. These observations are confirmed by the map of σ2 HVS-M lm for the noisy (original) image presented in Figure 5b (further denoted as σ2 HVS-M or lm; brighter pixels correspond to blocks with larger σ2 HVS-M or lm). The histogram of σ2 HVS-M or lm is shown in Figure 6a. It is seen that there are values of σ2 HVS-M or lm larger than 25, but this happens quite seldom and mostly in homogeneous image regions (analyze the noisy image in Figure 5a and the map of σ2 HVS-M or lm in Figure 5b jointly).
Figure 5

Green component of noisy image #14 (a), the map of σ 2 HVS-M lm for the noisy image (b), the map of σ 2 HVS-M lm for the filtered image (c), the ratio map in binary form, black if σ 2 HVS-M fi lm / σ 2 HVS-M or lm < 1.5 and white otherwise (d).

Figure 6

Histograms of σ 2 HVS-M or lm for the noisy image (a), σ 2 HVS-M fi lm for the filtered image (b), and σ 2 HVS-M fi lm / σ 2 HVS-M or lm (c).

Consider now the estimates σ2 HVS-M lm for the image processed by the DCT-based filter (further denoted as σ2 HVS-M fi lm). The corresponding map is presented in Figure 5c (brighter pixels correspond to blocks with larger σ2 HVS-M fi lm) and the histogram is given in Figure 6b. Analysis of the histogram shows that, on average, the values of σ2 HVS-M fi lm are smaller than σ2 HVS-M or lm, although there are values of σ2 HVS-M fi lm larger than 25. This takes place in textural regions and in edge/detail neighborhoods (analyze the noisy image in Figure 5a and the map of σ2 HVS-M fi lm in Figure 5c jointly).

Finally, we have obtained the map of the ratio σ2 HVS-M fi lm / σ2 HVS-M or lm (presented in binary form in Figure 5d) and the histogram of this ratio (see Figure 6c). Histogram analysis demonstrates that the ratios are mostly smaller than unity, i.e., local improvement of visual quality is provided by filtering. This mostly occurs in homogeneous image regions. However, there are also local degradations of visual quality where the distortions introduced by filtering are larger than the positive effect of noise removal. The places where such degradations are the most considerable are shown in white in the binary map in Figure 5d. Joint analysis of the noisy image in Figure 5a and the binary map in Figure 5d allows concluding that the largest local degradations of visual quality take place in heterogeneous image regions (edge/detail neighborhoods, high-contrast textures).
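The construction of such maps can be sketched with plain blockwise MSE standing in for σ2 HVS-M lm; the real metric additionally applies CSF weighting and contrast masking in the DCT domain [31], which is omitted here.

```python
import numpy as np

def block_mse_map(reference, distorted, block=8):
    """Per-block MSE between a reference and a distorted image.

    Plain MSE here; PSNR-HVS-M instead computes the local errors in the
    DCT domain with contrast sensitivity weighting and masking.
    """
    h, w = reference.shape
    rows, cols = h // block, w // block
    mse = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            d = (reference[i*block:(i+1)*block, j*block:(j+1)*block].astype(float)
                 - distorted[i*block:(i+1)*block, j*block:(j+1)*block])
            mse[i, j] = np.mean(d ** 2)
    return mse

def degradation_mask(mse_filtered, mse_noisy, threshold=1.5):
    """Binary map: True where filtering made the local error noticeably
    worse (ratio above the threshold), i.e., the white areas of the maps."""
    return (mse_filtered / np.maximum(mse_noisy, 1e-12)) > threshold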

Let us see what improvement of image visual quality can be produced by the BM3D filter [16] applied component-wise. Figure 7a shows the histogram of the ratio σ2 HVS-M fi lm / σ2 HVS-M or lm for this filter. It is similar to the histogram in Figure 6c. Figure 7b demonstrates the ratio binary map. Its comparison to the binary map in Figure 5d shows that the number of white pixels (for which σ2 HVS-M fi lm / σ2 HVS-M or lm > 1.5 and, thus, BM3D filtering introduces considerable distortions) is smaller than for the conventional DCT-based filter. Note that the values of PSNR-HVS-M k (14) for the BM3D filter are only 0.3...0.4 dB larger than for the DCT-based filter. Thus, the visual quality improvement is not large while the complexity of BM3D is considerably greater.
Figure 7

The histogram of the ratio σ 2 HVS-M fi lm / σ 2 HVS-M or lm (a) and its binary map (b) for the BM3D filter.

Consider now the test image #3, which has a simpler structure, corrupted by AWGN with a larger noise variance equal to 65. Its noisy green component is represented in Figure 8a. The noise is clearly seen, especially in homogeneous image regions. The map of σ2 HVS-M or lm for this noisy image is shown in Figure 8b (brighter pixels correspond to blocks with larger σ2 HVS-M or lm). The masking effect is well observed for edge neighborhoods (the dark pixels that appear on them show that the values of σ2 HVS-M or lm there are considerably smaller than in other places). The corresponding histogram of σ2 HVS-M or lm is given in Figure 9a. It is seen that there are values of σ2 HVS-M or lm larger than 65, although this happens very seldom, mostly in homogeneous image regions (analyze the noisy image in Figure 8a and the map in Figure 8b jointly).
Figure 8

Green component of noisy image #3 (a), the map of σ 2 HVS-M lm for the noisy image (b), the map of σ 2 HVS-M lm for the filtered image (c), the ratio map in binary form, black if σ 2 HVS-M fi lm / σ 2 HVS-M or lm < 1.5 and white otherwise (d).

Figure 9

Histograms of σ 2 HVS-M or lm for noisy image (a), σ 2 HVS-M fi lm for filtered image (b), and σ 2 HVS-M fi lm / σ 2 HVS-M or lm (c).

The estimates σ2 HVS-M fi lm for the image processed component-wise by the DCT-based filter are presented in Figure 8c; the corresponding histogram is shown in Figure 9b. The histogram analysis clearly demonstrates that, on average, the values of σ2 HVS-M fi lm have become considerably smaller than σ2 HVS-M or lm, although there is still a small percentage of σ2 HVS-M fi lm larger than 65. Such blocks occur in neighborhoods of high-contrast edges and fine details (as follows from the joint analysis of the noisy image in Figure 8a and the map in Figure 8c). The obtained map of the ratio σ2 HVS-M fi lm / σ2 HVS-M or lm is presented in binary form in Figure 8d; the histogram of this ratio is given as well (Figure 9c). The analysis of the histogram shows that for a large percentage of pixels (blocks) the ratios are smaller than unity. This means that a local improvement of visual quality is provided by the filtering. This improvement is substantial since there are many values of the ratio smaller than 0.5. As can be expected, substantial improvement of local visual quality takes place mainly in homogeneous image regions.

However, even for the considered simple-structure test image, there exist local degradations of visual quality as well. The map in Figure 8d shows that the distortions introduced by filtering are large in sharp edge and detail neighborhoods (shown in white in the binary map in Figure 8d). This conclusion once more stresses the known drawback of the DCT-based filter, namely, introducing distortions in places of sharp transitions in images [25, 62]. It means that if the visual quality of filtered images is of prime importance, primary attention in filter design should be paid to the preservation of edges and fine details. In this sense, non-local filtering methods are able to provide certain benefits w.r.t. DCT-based filtering. Note that the attention of observers to edge/detail neighborhoods has been stated in several articles and exploited in the design of visual quality metrics [28, 30, 63].

5. Filter efficiency analysis: spatially correlated noise case

Having studied the i.i.d. noise case, let us now consider filtering of images corrupted by spatially correlated noise. First, there are fewer filters designed and tested for spatially correlated noise removal (see [35, 36, 64-67] and references therein). Second, under the assumption of spatially correlated noise present in images, several questions arise immediately: what is a model for the spatially correlated noise, what a priori information on its characteristics (statistics, spatial spectrum) is available, and are the 2D spatial spectrum or correlation function the same for all parts of a processed image (does an assumption on stationarity of the noise or invariance of its characteristics hold)?

These questions require additional study, and the answers depend on the practical application at hand. To simplify the situation, let us assume that the spatially correlated noise is purely additive and stationary in the sense that its 2D spatial spectrum is the same for all parts of the images to be processed. However, even in this case there is a wide variety of possible 2D spectra of spatially correlated noise. To partly alleviate this uncertainty, we first assume that the noise spectrum (in the DCT domain for 8 × 8 blocks) is known in advance or estimated in a blind manner with appropriate accuracy [64]. Second, in our simulations we consider two models of spatially correlated noise that differ from each other by their 2D spatial DCT spectra in terms of shape or, equivalently, by the main lobe width of the 2D spatial autocorrelation function (ACF). In other words, we carry out a brief analysis of how the main properties of the spatially correlated noise ACF influence original image visual quality and the efficiency of DCT-based filtering.

For the spatially correlated noise case, the thresholds for the corresponding modification of the DCT-based filter (MDCT-based filter) should be frequency dependent, i.e., proportional to (Wnorm (n, m))0.5, where Wnorm (n, m) is the normalized DCT spectrum of the noise for the block size used, and n and m are frequency indices. Note that the DC coefficients in blocks are not thresholded (changed) while carrying out filtering [35, 37].
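A minimal sketch of such frequency-dependent hard thresholding follows, assuming non-overlapping blocks and a threshold of the form β·σ·(Wnorm(n, m))^0.5 with an assumed multiplier β = 2.6; the actual MDCT-based filter of [35, 37] uses overlapping blocks with output averaging.

```python
import numpy as np
from scipy.fft import dctn, idctn

def mdct_denoise(channel, sigma, w_norm, beta=2.6, block=8):
    """Frequency-dependent hard thresholding in non-overlapping 8x8 blocks.

    Sketch under stated assumptions: threshold T(n, m) = beta * sigma *
    sqrt(Wnorm(n, m)); beta = 2.6 is an assumed multiplier, and the DC
    coefficient of each block is left untouched, as in the text.
    """
    out = channel.astype(float).copy()
    thr = beta * sigma * np.sqrt(w_norm)
    h, w = channel.shape
    for l in range(0, h - block + 1, block):
        for m in range(0, w - block + 1, block):
            coeffs = dctn(out[l:l+block, m:m+block], norm='ortho')
            dc = coeffs[0, 0]
            coeffs[np.abs(coeffs) < thr] = 0.0
            coeffs[0, 0] = dc                  # DC coefficient is never thresholded
            out[l:l+block, m:m+block] = idctn(coeffs, norm='ortho')
    return out
```

With Wnorm ≡ 1 (a flat spectrum) the scheme degenerates to the standard fixed-threshold DCT filter for i.i.d. noise, which is exactly why the standard filter underperforms when the actual noise spectrum is far from flat.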

Note that in the case of additive spatially correlated noise, its spatial correlation should be taken into account. The standard DCT-based filter with a fixed (frequency-independent) threshold is not efficient for removal of spatially correlated noise. This has clearly been demonstrated in terms of output PSNR [35, 37] and in terms of the metric PSNR-HVS-M [35, 52]. The difference in output PSNR between the standard and MDCT filters can be from 1 to 4 dB [35, 37, 52]. The increase is the smallest for complex-structure images such as the test image #13 (about 1.5 dB) but it can reach 4 dB for simple-structure images such as the test image #3 for noise variance equal to 65. However, the values of PSNR k (n) provided by the MDCT-based filter are smaller than for the i.i.d. noise case for each of the analyzed test images [62]. This shows that removal of spatially correlated noise is a more difficult task than AWGN filtering.

The difference in the metric PSNR-HVS-M values between the considered standard (frequency-independent threshold) and MDCT filters is substantial (up to 3 dB). Dependences PSNR-HVS-M k (n) for components of color images before filtering (the lower group of three curves) and after image denoising by the standard DCT-based filter (the upper group of three curves, σ2 = 65) are presented in Figure 10a. It is seen that the presence of spatially correlated noise leads to a larger degradation of image visual quality than the presence of AWGN with the same noise variance (compare the lower groups of curves in Figures 10a and 3a). As can also be seen, the standard DCT-based filtering improves the visual quality of the processed images for all the test images. The increase of PSNR-HVS-M after filtering ranges from 0.7 dB for the most textural images to 2.5 dB for the simplest test images.
Figure 10

Dependences PSNR-HVS-M k ( n ), dB for components of color images (spatially correlated noise, σ 2 = 65) before (thin lines) and after filtering by the standard DCT-based filter (a) and by its modification with frequency dependent thresholds (b).

Dependences PSNR-HVS-M k (n) for the output of the MDCT filter are represented in Figure 10b. Comparison to the upper group of curves in Figure 10a demonstrates that the MDCT filter produces a considerably larger improvement of output image visual quality. However, the worst results (the smallest values of PSNR-HVS-M k (n)) are observed for the most textural (complex-structure) test images #1, #5, #13, #14, and #18.

Thus, it is reasonable to apply the MDCT-based filtering with frequency-dependent thresholding adapted to the DCT spectrum of the spatially correlated noise. Because of this, below we consider only this filter. The main attention is paid to values of noise variance smaller than 65 since, as seen from the plots in Figure 10, noise is clearly seen in both the original and filtered images for σ2 = 65 (the values of PSNR-HVS-M are considerably smaller than 40 dB).

As mentioned above, it is worth analyzing spatially correlated noise with different spatial spectra. Images in TID2008 are corrupted by spatially correlated noise obtained by applying the 3 × 3 mean filter to 2D AWGN. In this section, we continue to simulate spatially correlated noise in this manner but with variance smaller than 65 (this case is treated as considerably correlated noise, CCN). Besides, we simulate middle correlation noise (MCN) by applying to 2D AWGN a linear 3 × 3 scanning window FIR filter with the weights

Such a filter is characterized by a wider spectrum and, respectively, a narrower main lobe of the 2D ACF. By simulating large 2D arrays of pure spatially correlated noise in this way, it is possible to estimate the Wnorm (n, m) used in the MDCT-based filter quite accurately. The 8 × 8 matrices of (Wnorm (n, m))0.5 for both variants of spatially correlated noise are given in Figure 15. It is seen that the values of (Wnorm (n, m))0.5 for low frequencies (upper left corner) and high frequencies (lower right corner) differ a lot, especially in the CCN case. Matrix elements symmetric with respect to the main diagonal are practically equal to each other. The small differences in their values are explained by the finite size of the 2D arrays of simulated noise used for spatial spectrum estimation.
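The estimation of Wnorm (n, m) from simulated noise can be sketched as follows; normalization to unit mean over the coefficients is our assumption (it makes Wnorm identically one for i.i.d. noise), and wrap-around borders are used for simplicity.

```python
import numpy as np
from scipy.fft import dctn

def estimate_w_norm(block=8, size=2048, seed=0):
    """Estimate the normalized 8x8 DCT spectrum Wnorm(n, m) of spatially
    correlated noise produced by 3x3 mean filtering of AWGN (the CCN model).

    The average squared DCT spectrum over non-overlapping blocks of a large
    pure-noise array is normalized to unit mean (our assumed convention).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, (size, size))
    # 3x3 mean filter via shifted sums (circular borders for simplicity)
    ccn = sum(np.roll(np.roll(noise, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    spec = np.zeros((block, block))
    n_blocks = 0
    for l in range(0, size - block + 1, block):
        for m in range(0, size - block + 1, block):
            spec += dctn(ccn[l:l+block, m:m+block], norm='ortho') ** 2
            n_blocks += 1
    spec /= n_blocks
    return spec / spec.mean()
```

For this low-pass noise the estimated spectrum is strongly concentrated at low frequencies, matching the large low-frequency/high-frequency disparity of the matrices described above; the MCN model would give a flatter spectrum.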

Since spatially correlated noise with σ2 = 65 is clearly visible in original and filtered images, let us consider considerably smaller values of noise variance. Figure 11 shows the dependences PSNR-HVS-M k (n), dB for components of color images for noise variance equal to 9 for CCN and MCN. As previously, the lower groups of curves correspond to the original (noisy) images while the upper groups relate to the filtered images.
Figure 11

Dependences PSNR-HVS-M k ( n ), dB for components of color images ( σ 2 = 9) before (thin lines) and after filtering by the MDCT-based filter for CCN (a) and MCN (b).

The curves' behavior is very similar. The only difference is that, for a given test image, its visual quality is slightly better in the MCN case than in the CCN case both before and after filtering. As can be expected, visual quality improves because of filtering, although this improvement is small for complex-structure images and can reach 4 dB for simple-structure images. For both CCN and MCN, noise is visible in all the original test images since PSNR-HVS-M k (n) < 40 dB. However, even after filtering, residual noise and introduced distortions remain visible in most output images. This means that the task of image filtering with the purpose of enhancement (visual quality improvement) remains challenging in the spatially correlated noise case even for rather small values of noise variance.

Consider now a smaller value of noise variance, equal to 6. The obtained dependences are represented in Figure 12. Their analysis shows the following. Spatially correlated noise is still visible in most original noisy images, especially simple-structure ones (visual inspection has shown that noise can be visible in homogeneous regions of such images). However, according to the rule PSNR-HVS-M k (n) > 40 dB, residual noise and introduced distortions become practically invisible after filtering.
Figure 12

Dependences PSNR-HVS-M k ( n ), dB for components of color images ( σ 2 = 6) before (thin lines) and after filtering by the MDCT-based filter for CCN (a) and MCN (b).

One can argue that using DCT-based filters together with the DCT-based visual quality metric PSNR-HVS-M in our analysis might bias the results. To avoid such possible criticism, we have also analyzed the wavelet-based metric MSSIM [27]. Recall that this metric is within the limits from 0 (very bad quality) to 1 (perfect quality). If MSSIM is larger than 0.985...0.99, distortions are practically not seen.

The dependences MSSIM k (n) for MCN with σ2 = 9 are shown in Figure 13a. Their analysis allows drawing the same conclusions as for the dependences PSNR-HVS-M k (n) in Figure 11b. No original test image has perfect visual quality (all MSSIM values are smaller than 0.99). For almost all the test images, residual noise and distortions are visible after filtering. Thus, the metric MSSIM indicates the same as PSNR-HVS-M does.
Figure 13

MSSIM k ( n ) for MCN with σ 2 = 9 before (thin lines) and after filtering (a) and R̂ k ( n ) for CCN with σ 2 = 65 (b).

Consider now the behavior of the parameter R̂ k for spatially correlated noise. Dependences R̂ k (n) for the particular case of CCN with σ2 = 65 are represented in Figure 13b. These dependences for the color components are very similar. Note that, in contrast to the AWGN case (see Figure 4), the values of R̂ k are about 1.55 for almost all test images (the values are a little smaller for the MCN case). Note also that for σ2 = 6, the values of R̂ k are within the limits 1.24...1.93 for CCN and 1.15...1.96 for MCN. In any case, the main observation is that for spatially correlated noise, the values of R̂ k are considerably larger than unity.

Joint analysis of R̂ k in Figures 4 and 13 indicates that its rather small values, e.g., not exceeding 1.05, are observed for simple-structure images corrupted by noise close to i.i.d. Then, it is reasonable to expect high efficiency of denoising by the standard DCT filter. Otherwise, i.e., when R̂ k exceeds 1.05...1.1, the situation is not so clear and it is difficult to distinguish complex-structure images corrupted by i.i.d. noise from images corrupted by spatially correlated noise. Then, it is worth estimating the spatial spectrum of the noise. This can be performed automatically by the methods given in [35, 64]. More details on how to use such blind estimates in filtering can be found in the literature [35].

Thus, image filtering can be fully blind (automatic). The first stage is to obtain R̂ k and to compare it to a threshold (e.g., 1.08). Then, the noise variance can be estimated if it is a priori unknown. After this, noise spatial correlation (spectrum) characteristics are to be determined if needed (if R̂ k exceeds the threshold). Finally, the corresponding DCT-based filtering is to be applied taking into account the available or obtained data on noise variance and spatial correlation characteristics. This automatic procedure is described roughly and needs more thorough analysis in the future.

Let us give one example illustrating the efficiency of the modified DCT filter. The noise-free test image #7 from TID2008, a typical representative of middle-complexity color images, is shown in Figure 14a. Its noisy version (σ2 = 65, spatially correlated noise, CCN) is given in Figure 14b. The noise has an appearance typical for digital cameras of bad quality or operating in bad illumination conditions. The noise is well seen in homogeneous regions and its influence can be noticed in textural regions. The output of the MDCT-based filter adapted to the spatial spectrum of the noise (Figure 15a) is presented in Figure 14c. The noise is suppressed considerably although residual noise is visible compared to the image in Figure 14a. Useful information (edges, textures, details) is preserved well enough although there are some noticeable artifacts in the neighborhoods of sharp edges. This is in agreement with the value of PSNR-HVS-M approximately equal to 30.7 dB for all the color components (see the data for the test image #7 in Figure 10b).
Figure 14

The test color image #7: noise-free (a), noisy (b), filtered by the component-wise MDCT filter (c) and by the 3D MDCT filter (d).

Figure 15

Matrices of ( W norm ( n , m )) 0.5 for CCN (a) and MCN (b).

We have also modified the 3D DCT filter to be able to remove spatially correlated noise. The modification is as follows. Decorrelation of the color components is carried out first. Then, the MDCT-based filter adapted to the known 2D DCT spectrum of the noise is applied to the obtained components. Finally, the inverse 3-element DCT is performed. For the image filtered in this way, PSNR-HVS-M has been determined. It is equal to 32.07, 32.36, and 32.23 dB for the R, G, and B components, respectively. As seen, these values are, on average, about 1.5 dB larger than for the MDCT-based filter applied component-wise. The obtained output image is represented in Figure 14d. Comparison of the images in Figure 14c, d shows that the latter has better visual quality. This once more demonstrates the benefits of vector (3D) processing of color images.
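The vector scheme just described can be sketched as follows, assuming an orthonormal 3-point DCT across the R, G, B planes for decorrelation and a simplified non-overlapping-block denoiser with an i.i.d. threshold standing in for the spectrum-adapted MDCT filter.

```python
import numpy as np
from scipy.fft import dct, idct, dctn, idctn

def denoise_2d(channel, sigma, beta=2.6, block=8):
    """Simplified component denoiser: hard DCT thresholding in
    non-overlapping 8x8 blocks (stand-in for the MDCT filter; beta = 2.6
    is an assumed threshold multiplier, DC coefficients are kept)."""
    out = channel.astype(float).copy()
    h, w = out.shape
    for l in range(0, h - block + 1, block):
        for m in range(0, w - block + 1, block):
            c = dctn(out[l:l+block, m:m+block], norm='ortho')
            dc = c[0, 0]
            c[np.abs(c) < beta * sigma] = 0.0
            c[0, 0] = dc
            out[l:l+block, m:m+block] = idctn(c, norm='ortho')
    return out

def denoise_3d(rgb, sigma):
    """Vector (3D) scheme from the text: a length-3 DCT across the R, G, B
    planes decorrelates the components, each decorrelated plane is
    denoised, and the inverse 3-point DCT restores the color image."""
    decorr = dct(rgb.astype(float), axis=2, norm='ortho')  # 3-point DCT over color axis
    for k in range(3):
        decorr[:, :, k] = denoise_2d(decorr[:, :, k], sigma)
    return idct(decorr, axis=2, norm='ortho')
```

Because the 3-point DCT is orthonormal, i.i.d. noise of equal variance in the three channels keeps the same variance in each decorrelated plane, so the same per-plane threshold can be reused.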

Concerning practical application of the obtained results, it is worth mentioning the following.

First, the methods of blind evaluation of noise variance and spatial spectrum [56, 57, 60, 64] have not been intensively tested for small values of noise variance (of the order of 5...20 for images with 8-bit representation). It has been shown experimentally that accurate estimation of noise characteristics for less intensive noise is a more complicated task than for intensive noise [44, 60]. Thus, such analysis should be done in the future and, possibly, the design of new, more efficient blind methods will be needed.

Second, the results presented above rely on a simplified (purely additive) noise model. It is desirable to study what happens if this assumption is used in processing real-life data, or to design modified methods able to take into account the peculiarities of more realistic noise models [68].

Third, it is worth considering how PSNR-HVS-M or MSSIM can be estimated without having a reference image. Such studies have already been done for the metric SSIM [69].

Fourth, although our studies deal with images with 8-bit representation, the same conclusions hold for images with wider dynamic ranges, e.g., hyperspectral data [4]. In a more general form, PSNR-HVS-M is defined as 10 log10(D2/σ2 HVS-M), where D defines the dynamic range of the data representation. Then, if, e.g., PSNR-HVS-M k > 40 dB, it is possible to suppose that noise is invisible in the visualized kth component (subband) image of hyperspectral data. The simulation data provided above allow predicting when it is possible to expect considerable improvement of image quality due to noise removal and when it is possible to skip the filtering stage.
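In code, the generalized metric value follows directly from σ2 HVS-M and the dynamic range D:

```python
import math

def psnr_general(mse_hvs_m, dynamic_range=255.0):
    """Generalized PSNR-HVS-M: 10 * log10(D^2 / sigma^2_HVS-M), where D is
    the dynamic range of data representation (D = 255 for 8-bit images,
    larger for wider dynamic ranges)."""
    return 10.0 * math.log10(dynamic_range ** 2 / mse_hvs_m)
```

For an 8-bit image with an average weighted local MSE of 6, for example, the metric comes out slightly above 40 dB, i.e., just past the invisibility threshold used throughout this article.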

6. Conclusions

We have extensively tested the filtering efficiency of DCT-based filters for color images of the database TID2008 and for an additional set of noisy images obtained using the reference images of this database. The images were corrupted by AWGN and spatially correlated noise with a wide set of variance values. As can be expected, filtering efficiency considerably depends on image complexity and noise variance. For rather simple images corrupted by AWGN with rather large variance, filtering efficiency is the highest according to both the standard metric PSNR and the metric PSNR-HVS-M suited for characterizing visual quality. In the cases of spatially correlated noise, high image complexity, and small noise variance, substantial improvement of image quality due to filtering is problematic. It is shown that if the AWGN variance is less than 20, noise is practically not seen. To be invisible in original images, spatially correlated noise should have about 3...4 times smaller variance. Residual noise and distortions due to filtering can become invisible after denoising if the noise variance in the original image is small enough and efficient filtering, e.g., vector or 3D filtering, is applied. In the future, it is worth paying more attention to efficiency analysis and performance improvement for 3D filters in particular.

Declarations

Authors’ Affiliations

(1)
National Aerospace University, Kharkov, Ukraine
(2)
Tampere University of Technology, Tampere, Finland

References

  1. Single-Sensor Imaging: Methods and Applications for Digital Cameras (Image Processing Series). Edited by: Lukac R. CRC Press; 2009:566.Google Scholar
  2. Theuwissen A: Course on Camera System. Lecture Notes (CEU-Europe, 2005) 2-5.Google Scholar
  3. Xiuping J, Richards JA: Remote Sensing Digital Image Analysis. 4th edition. Springer, Berlin; 2006:439.Google Scholar
  4. Chein-I Chang (Ed): Hyperspectral Data Exploitation: Theory and Applications. Wiley-Interscience; 2007.Google Scholar
  5. Kiranyaz S, Uhlmann S, Gabbouj M: Dominant Color Extraction Based on Dynamic Clustering by Multi-dimensional Particle Swarm Optimization. Proceedings of the Seventh International Workshop on Content-Based Multimedia Indexing, CBMI 2009 2009, 181-188.View ArticleGoogle Scholar
  6. Smolka B, Plataniotis KN, Venetsanopoulos AN: Nonlinear techniques for color image processing. In Nonlinear Signal and Image Processing: Theory, Methods, and Applications. Electrical Engineering & Applied Signal Processing Series. Volume Chapter 12. Edited by: Barner K, Arce G. CRC Press; 2003:560.Google Scholar
  7. Yu G, Vladimirova T, Sweeting MN: Image compression systems on board satellites. Acta Astronautica 2009, 64: 988-1005. 10.1016/j.actaastro.2008.12.006View ArticleGoogle Scholar
  8. Elad M: Sparse and Redundant Representations. From Theory to Applications in Signal and Image Processing. Springer Science+Business Media, LLC; 2010:376.View ArticleGoogle Scholar
  9. Morillas S, Schulte S, Melange T, Kerre E, Gregori V: A soft-switching approach to improve visual quality of colour image smoothing filters. Proceedings of ACIVS, Springer Series on LNCS 2007, 4678: 254-261.Google Scholar
  10. Astola J, Kuosmanen P: Fundamentals of Nonlinear Digital Filtering. CRC Press LLC, Boca Raton; 1997.Google Scholar
  11. Mallat S: A Wavelet Tour of Signal Processing. Academic Press, San Diego; 1998.Google Scholar
  12. Plataniotis KN, Venetsanopoulos AN: Color Image Processing and Applications. Springer, New York; 2000:355.View ArticleGoogle Scholar
  13. Pitas I, Venetsanopoulos AN: Nonlinear Digital Filters: Principles and Applications. Kluwer Academic Publisher; 1990:392.View ArticleGoogle Scholar
  14. Hyvärinen A, Hoyer P, Oja E: Image denoising by sparse code shrinkage. In Intelligent Signal Processing. Volume 1. Edited by: Haykin S, Kosko B. IEEE Press, New York; 2001:554-568.Google Scholar
  15. Oktem R, Egiazarian K, Lukin V, Ponomarenko N, Tsymbal O: Locally adaptive DCT filtering for signal-dependent noise removal. EURASIP J Adv Signal Process 2007, 10. Article ID 42472Google Scholar
  16. Dabov K, Foi A, Katkovnik V, Egiazarian K: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process 2007,16(8):2080-2095.MathSciNetView ArticleGoogle Scholar
  17. Kervrann C, Boulanger J: Local adaptivity to variable smoothness for exemplar-based image regularization and representation. Int J Comput Vis 2008,79(1):45-69. 10.1007/s11263-007-0096-2View ArticleGoogle Scholar
  18. Liu C, Szeliski R, Kang SB, Zitnick CL, Freeman WT: Automatic estimation and removal of noise from a single image. IEEE Trans Pattern Anal Mach Intell 2008,30(2):299-314.View ArticleGoogle Scholar
  19. Barducci A, Guzzi D, Marcoionni P, Pippi I: CHRIS-Proba performance evaluation: signal-to-noise ratio, instrument efficiency and data quality from acquisitions over San Rossore (Italy) test site. Proceedings of the 3rd ESA CHRIS/Proba Workshop, Italy 2005, 11.Google Scholar
  20. Lim SH: Characterization of noise in digital photographs for image processing. Proceedings of Digital Photography II SPIE 6069:Google Scholar
  21. Foi A: Practical denoising of clipped or overexposed noisy images. In Proceedings of the 16th European Signal Processing Conference EUSIPCO 2008. Lausanne, Switzerland; 2008:1-5.Google Scholar
  22. Rabie T: Robust estimation approach to blind denoising. IEEE Trans Image Process 2005,14(11):1755-1766.View ArticleGoogle Scholar
  23. Van Zyl Marais I, Steyn WH, du Preez JA: On-board image quality assessment for a small low earth orbit satellite. Proceedings of the 7th IAA Symposium on Small Satellites for Earth Observation 2009.Google Scholar
  24. Lukin VV, Abramov SK, Ponomarenko NN, Uss ML, Zriakhov MS, Vozel B, Chehdi K, Astola J: Methods and automatic procedures based on blind evaluation of noise type and characteristics. SPIE J Appl Remote Sens 2011, 5: 053502. 10.1117/1.3539768View ArticleGoogle Scholar
  25. Lukin V, Ponomarenko N, Zelensky A, Astola J, Egiazarian K: Automatic design of locally adaptive filters for pre-processing of images subject to further interpretation. In Proceedings of 2006 IEEE Southwest Symp. on Image Analysis and Interpretation. Denver, USA; 2006:41-45.View ArticleGoogle Scholar
  26. Wang Z, Bovik AC: Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Process Mag 2009, 26: 98-117.View ArticleGoogle Scholar
  27. Wang Z, Simoncelli EP, Bovik AC: Multi-scale structural similarity for visual quality assessment. Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems and Computers 2003, 2: 1398-1402.Google Scholar
  28. Larson EC, Chandler DM: Most apparent distortion: full-reference image quality assessment and the role of strategy. J Electron Image 2010,19(1):011006-1-011006-21.Google Scholar
  29. Ponomarenko N, Battisti F, Egiazarian K, Carli M, Astola J, Lukin V: Metrics performance comparison for color image database. In Proceedings of VPQM 2009. Scottsdale, USA; 2009:6.
  30. Pedersen M, Hardeberg JY: A new spatial hue angle metric for perceptual image difference. Springer Series on LNCS, Volume 5646; 2009:81-90.
  31. Ponomarenko N, Silvestri F, Egiazarian K, Carli M, Astola J, Lukin V: On between-coefficient contrast masking of DCT basis functions. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics. Scottsdale, USA; 2007:4.
  32. Gupta S, Kaur L, Chauhan RC, Saxena SC: A versatile technique for visual enhancement of medical ultrasound images. Digital Signal Process 2007, 17(3):542-560. doi:10.1016/j.dsp.2006.12.001
  33. Ponomarenko NN, Lukin VV, Zelensky AA, Koivisto PT, Egiazarian KO: 3D DCT based filtering of color and multichannel images. Telecommun Radio Eng 2008, 67:1369-1392. doi:10.1615/TelecomRadEng.v67.i15.50
  34. Phillips RD, Blinn CE, Watson LT, Wynne RH: An adaptive noise-filtering algorithm for AVIRIS data with implications for classification accuracy. IEEE Trans Geosci Remote Sens 2009, 47(9):3168-3179.
  35. Lukin V, Ponomarenko N, Egiazarian K, Astola J: Adaptive DCT-based filtering of images corrupted by spatially correlated noise. In Proceedings of SPIE Conference Image Processing: Algorithms and Systems VI, SPIE 2008, 6812:12.
  36. Lukin V, Fevralev D, Ponomarenko N, Abramov S, Pogrebnyak O, Egiazarian K, Astola J: Discrete cosine transform-based local adaptive filtering of images corrupted by nonstationary noise. J Electron Imaging 2010, 19(2):15.
  37. Ponomarenko N, Lukin V, Djurovic I, Simeunovic M: Pre-filtering of multichannel remote sensing data for agricultural bare soil field parameter estimation. In Proceedings of BioSense 2009. Novi Sad, Serbia; 2009:4.
  38. Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single image raw data. IEEE Trans Image Process 2008, 17(10):1737-1754.
  39. Ponomarenko NN, Lukin VV, Egiazarian K, Lepisto L: Color image lossy compression based on blind evaluation and prediction of noise characteristics. In Proceedings of the Conference Image Processing: Algorithms and Systems IX, Volume 7870. San Francisco, SPIE; 2011:12.
  40. Ponomarenko N, Carli M, Lukin V, Egiazarian K, Astola J, Battisti F: Color image database for evaluation of image quality metrics. In Proceedings of the International Workshop on Multimedia Signal Processing, Australia; 2008:403-408.
  41. Slone RM, et al: Assessment of visually lossless irreversible image compression: comparison of three methods by using an image comparison workstation. Radiology 2000, 217:772-779.
  42. Sermadevi Y, Masry M, Hemami SS: MINMAX rate control with a perceived distortion metric. In Proceedings of SPIE Visual Communications and Image Processing. San Jose, CA; 2004.
  43. Chatterjee P, Milanfar P: Is denoising dead? IEEE Trans Image Process 2010, 19(4):895-911.
  44. Zoran D, Weiss Y: Scale invariance and noise in natural images. In Proceedings of the IEEE 12th International Conference on Computer Vision 2009, 2209-2216.
  45. Ponomarenko N, Krivenko S, Lukin V, Egiazarian K, Astola J: Lossy compression of noisy images based on visual quality: a comprehensive study. EURASIP J Adv Signal Process 2010, 976436:13. [http://www.hindawi.com/journals/asp/aip.976436.html]
  46. Astola J, Haavisto P, Neuvo Y: Vector median filters. Proc IEEE 1990, 78:678-689. doi:10.1109/5.54807
  47. Lukin VV, Oktem R, Ponomarenko N, Egiazarian K: Image filtering based on discrete cosine transform. Telecommun Radio Eng 2007, 66(18):1685-1701. doi:10.1615/TelecomRadEng.v66.i18.70
  48. Sheikh HR, Sabir MF, Bovik AC: A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process 2006, 15(11):3440-3451. (see http://live.ece.utexas.edu/research/quality/subjective.htm for more details)
  49. Zhu X, Milanfar P: Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans Image Process 2010, 19(12):3116-3132.
  50. Danielyan A, Foi A: Noise variance estimation in non-local transform domain. In Proceedings of LNLA 2009, 41-45.
  51. Lukin V, Abramov S, Ponomarenko N, Egiazarian K, Astola J: Image filtering: potential efficiency and current problems. In Proceedings of ICASSP. Prague, Czech Republic; 2011:4.
  52. Fevralev D, Ponomarenko N, Lukin V, Egiazarian K, Astola J: Efficiency analysis of DCT-based filters for color image database. In Proceedings of the Conference Image Processing: Algorithms and Systems IX, Volume 7870. San Francisco, SPIE; 2011:12.
  53. Dabov K, Foi A, Katkovnik V, Egiazarian K: Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In Proceedings of the IEEE International Conference on Image Processing, ICIP 2007. San Antonio, TX, USA; 2007:313-316.
  54. Uss ML, Vozel B, Lukin V, Chehdi K: Potential MSE of color image local filtering in component-wise and vector cases. In Proceedings of CADSM 2011, 11.
  55. Sendur L, Selesnick IW: Bivariate shrinkage with local variance estimation. IEEE Signal Process Lett 2002, 9(12):438-441. doi:10.1109/LSP.2002.806054
  56. Lukin VV, Abramov SK, Uss ML, Marusiy IA, Ponomarenko NN, Zelensky AA, Vozel B, Chehdi K: Testing of methods for blind estimation of noise variance on large image database. Book chapter. In Theoretical and Practical Aspects of Digital Signal Processing in Information-Telecommunication Systems. Russia; 2009:43-70.
  57. Vozel B, Abramov S, Chehdi K, Lukin V, Ponomarenko N, Uss M: Blind methods for noise evaluation in multi-component images. Book chapter. In Multivariate Image Processing. France; 2009:261-299.
  58. Lukin V, Abramov S, Ponomarenko N, Uss M, Vozel B, Chehdi K, Astola J: Processing of images based on blind evaluation of noise type and characteristics. In Proceedings of SPIE-ERS. Berlin; 2009, 7477:12.
  59. Lukin V, Zriakhov M, Krivenko S, Ponomarenko N, Miao Z: Lossy compression of images without visible distortions and its applications. In CD-ROM Proceedings of ICSP. Beijing; 2010:4.
  60. Lukin VV, Abramov SK, Zelensky AA, Astola J, Vozel B, Chehdi K: Improved minimal inter-quantile distance method for blind estimation of noise variance in images. Proc SPIE 2007, 6748:12.
  61. Kendall MG: Advanced Theory of Statistics, Volume 1. Charles Griffin & Company, London; 1945.
  62. Tsymbal OV, Lukin VV, Ponomarenko NN, Zelensky AA, Egiazarian KO, Astola JT: Three-state locally adaptive texture preserving filter for radar and optical image processing. EURASIP J Appl Signal Process 2005, 8:1185-1204.
  63. Ponomarenko N, Krivenko S, Egiazarian K, Lukin V, Astola J: Weighted mean square error for estimation of visual quality of image denoising methods. In CD-ROM Proceedings of VPQM. Scottsdale, USA; 2010:6.
  64. Ponomarenko NN, Lukin VV, Egiazarian KO, Astola JT: A method for blind estimation of spatially correlated noise characteristics. In Proceedings of SPIE Conference Image Processing: Algorithms and Systems VII, Volume 7532. San Jose, USA; 2010:12.
  65. Deergha R, Swamy MNS, Plotkin E: Adaptive filtering approaches for colour image and video restoration. Proc IEE-VISP 2003, 150:168-177.
  66. Solbo S, Eltoft T: A wavelet domain filter for correlated speckle. In Proceedings of EUSIPCO 2006, 4.
  67. Goossens B, Pizurica A, Philips W: Removal of correlated noise by modeling the signal of interest in the wavelet domain. IEEE Trans Image Process 2009, 18(6):1153-1165.
  68. Foi A: Practical denoising of clipped or overexposed noisy images. In Proceedings of the 16th European Signal Processing Conference, EUSIPCO 2008. Lausanne, Switzerland; 2008:1-5.
  69. Rehman A, Wang Z: Reduced-reference SSIM estimation. In Proceedings of ICIP 2010, 289-292.

Copyright

© Fevralev et al.; licensee Springer. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.