Image Compression and Denoising Based on Tree-Adapted Wavelet Shrinkage and Compression (TAWS-Comp)



ABSTRACT:

An algorithm is described for simultaneously compressing and denoising images. The algorithm is called tree-adapted wavelet shrinkage and compression (TAWS-Comp). TAWS-Comp is a synthesis of an image compression algorithm, adaptively scanned wavelet difference reduction (ASWDR), and a denoising algorithm, tree-adapted wavelet shrinkage (TAWS). As a compression procedure, TAWS-Comp inherits all of the advantages of ASWDR: its ability to achieve a precise bit-rate assigned before compressing, its scalability of decompression, and its capability for enhancing regions of interest. Such a full range of features has not been available with previous compressor plus denoiser algorithms. As a denoising procedure, TAWS-Comp is nearly as effective as TAWS alone. TAWS has been shown to have good performance, comparable to state-of-the-art denoisers. In many cases, TAWS-Comp matches the performance of TAWS while simultaneously performing compression. TAWS-Comp is compared with other combined compressor/denoisers, in terms of error reduction and compressed bit-rates. Simultaneous compression and denoising is needed when images are acquired from a noisy source and storage or transmission capacity is severely limited (as in some video coding applications). An application is described where the features of TAWS-Comp as both compressor and denoiser are exploited.

Keywords: image compression; image denoising; signal processing; video coding.

1. INTRODUCTION

We shall describe an algorithm for simultaneously compressing and denoising an image corrupted by additive random noise. The classical approach to compressing a noisy image, motivated by results from rate-distortion theory [1], [2], is to use a two-step process: in the first step, the noisy image is denoised, and in the second step the denoised image is compressed. There are situations, however, where limited processor resources call for the greater efficiency afforded by combining compressing with denoising. Combining the two is particularly apt for situations, such as some video coding applications, that satisfy the following three conditions: (1) the image source is noisy, (2) the time for processing is short and/or processor power is limited, and (3) the transmission channel capacity is low. Condition (1) calls for denoising, condition (3) calls for compressing, and condition (2) calls for simultaneous denoising and compressing in order to save time and/or free the processor for other tasks. From this point on, we shall refer to such a combined compressor plus denoiser as a compdenoiser.

The paper is organized as follows. In section 2, we describe the TAWS method. This section begins with a brief summary of the compression method from which TAWS is derived. Understanding the rationale behind this compression technique helps in understanding how TAWS combats the ringing artifacts that wavelet shrinkage suffers from, and how TAWS dynamically adapts the sizes of thresholds in relation to specific image features. In section 3, we report on denoisings of test images, comparing TAWS with various state-of-the-art denoising methods. The paper concludes in section 4 with a brief discussion of future work.

2. THE TAWS METHOD

The TAWS denoising method is built upon the ideas involved in a new lossy image codec called Adaptively Scanned Wavelet Difference Reduction (ASWDR), which is described in [1]. Here is a summary of this method, with some amendments made to adapt it to the noise-removal case:

Step 1. Perform a wavelet transform of the discrete image {f[i, j]}, producing the transformed image {f̃[i, j]}. For the denoisings reported on in this paper, the Daub 9/7 transform [5] was used exclusively.

Step 2. Use a scanning order for searching through the transformed image. This is a one-to-one and onto mapping {f̃[i, j]} → x[k], whereby the transform values are scanned through via the linear ordering k = 1, 2, …, M, where M is the number of pixels in the image. Initially, the scanning order is a zigzag through the subbands, from lower to higher [6].
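The baseline scanning order of Step 2 can be sketched as follows (a minimal illustration; the function name and the simple row-order scan within each subband are our simplifications of the zigzag described in [6]):

```python
def baseline_scan_order(n, levels):
    """Index pairs (i, j) of an n x n wavelet transform, scanned
    subband by subband from the lowest (coarsest) to the highest.
    Within each subband we use plain row order rather than a true
    zigzag -- an assumption made to keep the sketch short."""
    size = n >> levels
    # the all-lowpass subband comes first
    order = [(i, j) for i in range(size) for j in range(size)]
    while size < n:
        # the three detail subbands at this level: HL, LH, HH
        for (r0, c0) in ((0, size), (size, 0), (size, size)):
            order += [(r0 + i, c0 + j)
                      for i in range(size) for j in range(size)]
        size *= 2
    return order
```

Flattening the 2-D indices of this list then gives the linear ordering k = 1, 2, …, M of the text.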

Step 3. Choose an initial threshold, T, such that at least one transform value, x[n] say, satisfies |x[n]| ≥ T, and all transform values x[k] satisfy |x[k]| < 2T.

Step 4 (Significance pass). Determine the new significant index values, i.e., those new indices m for which x[m] has magnitude greater than or equal to the present threshold. Assign the value q[m] = T·sign(x[m]) as the quantized value corresponding to the transform value x[m].

Step 5 (Refinement pass). Refine the quantized transform values corresponding to old significant transform values. Each refined value is a better approximation to the exact transform value. More details of this step are provided below.
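Steps 3 through 5, together with the threshold halving of Step 7 below and the decoding at bin midpoints described in the remarks, can be sketched in a toy 1-D form (the function name is ours, and the sketch omits the scan-order bookkeeping that the real codec encodes):

```python
import math

def bitplane_quantize(x, num_passes=6):
    """Toy bit-plane quantizer: significance passes (Step 4) mark
    newly significant values, refinement passes (Step 5) add one bit
    of precision per halving of the threshold (Step 7)."""
    # Step 3: initial threshold T with max |x[k]| in [T, 2T)
    T = 2.0 ** math.floor(math.log2(max(abs(v) for v in x)))
    q = [0.0] * len(x)         # quantized magnitudes
    sig = [False] * len(x)     # significance flags
    for _ in range(num_passes):
        old_sig = list(sig)
        # Step 4 (significance pass): |x[m]| >= T becomes significant
        for k, v in enumerate(x):
            if not sig[k] and abs(v) >= T:
                sig[k], q[k] = True, T
        # Step 5 (refinement pass): refine old significant values
        for k, v in enumerate(x):
            if old_sig[k] and abs(v) >= q[k] + T:
                q[k] += T
        T /= 2.0               # Step 7: halve the threshold
    # decode at quantization-bin midpoints, restoring signs
    return [math.copysign(q[k] + T, x[k]) if sig[k] else 0.0
            for k in range(len(x))]
```

For instance, a value of magnitude 33 becomes significant when the threshold reaches 32, and later refinement passes narrow its bin exactly as in the [32, 48) example discussed below.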

Step 6. Create a new scanning order: among the still-insignificant transform values, place the children of significant parents first, since these are the values most likely to become significant when the threshold is halved. (The TAWS modification of this step is given in Step 6 (TAWS) below.)

Step 7. Divide the present threshold by 2 and repeat Steps 4–7 until this new threshold is less than a preset value G.

A few remarks are in order to clarify the procedure outlined above. First, in the refinement pass, the precision of the quantized values is increased to make them at least as accurate as the present threshold. For example, if an old significant transform value's magnitude |x[m]| lies in the interval [32, 48), say, and the present threshold is 8, then it is decided at this step whether its magnitude lies in [32, 40) or [40, 48). In the latter case, the new quantized value becomes 40·sign(x[m]); in the former case, the quantized value remains 32·sign(x[m]). The refinement pass is therefore simply the bit-plane encoding method used by most state-of-the-art compression algorithms [7].

Second, when the whole procedure is finished, a final step of further refinement can be done. The quantized values for all the significant coefficients (those whose magnitudes are at least as large as G) can be further refined by successive divisions of G by 2, executing the refinement step each time (but not the significance step; no new significant values are added). In practice, we have found that five more refinements are usually sufficient for further improvement of signal-to-noise ratio. (Even when these further refinements are omitted, as is done when including TAWS as an option within the compression procedure ASWDR, there is usually only a small effect on signal-to-noise ratio.) Finally, all the quantized values are set at the midpoints of their quantization bins, and an inverse transform is performed (after shrinkage, which we discuss below) to produce a denoised image.

Third, and most importantly, the change in scanning order described in Step 6 is a new technique in compression. Unlike most state-of-the-art compression methods, ASWDR is not a zero-tree method.
Rather, it utilizes an adaptive scanning designed to efficiently encode the exact positions of significant transform values. It is shown in [1] that this adaptive scanning enables ASWDR to retain more significant details in compressed images. The TAWS method aims to utilize this improved approximation property of ASWDR in order to facilitate noise removal by thresholding of wavelet transforms. TAWS is a much simpler, low-complexity procedure, requiring O(M) operations. A TAWS denoising of a 512x512 image typically requires just a few seconds on a PC (200 MHz with 32 MB RAM). Thus TAWS is more advantageous in situations where speed is critical and/or compression is needed.

TAWS bears some relation to the more recently proposed universal HMT (uHMT) method of [4], which is a low-complexity HMT algorithm. Both TAWS and uHMT use properties of correlations across scales in quad-trees of wavelet coefficients. TAWS, however, is a deterministic procedure, unlike uHMT, which is probabilistic. Furthermore, in TAWS the inter-layer correlations between significant wavelet transform values are based on half-threshold correlations between parents and children (more on this below), not on same-threshold correlations as in the HMT procedure.

The TAWS method of denoising is a modification of the classic method of wavelet shrinkage denoising. Wavelet shrinkage is described in [2] as a nearly optimal method for removing additive Gaussian noise. We assume that a given discrete image f is contaminated by additive i.i.d. Gaussian noise n. That is, the contaminated image g is related to the original image f by g = f + n, where the noise values n[i, j] are independent random variables, each with a zero-mean Gaussian distribution of variance σ². In [2] and [12], wavelet shrinkage is proven to be asymptotically nearly optimal for removing i.i.d. Gaussian noise. That is, the values {g̃[i, j]} of the wavelet transform of the noisy image are subjected to the following shrinkage function (where G is the threshold):

$$S_G(x) = \begin{cases} \operatorname{sign}(x)\,(|x| - G), & |x| > G \\ 0, & |x| \le G \end{cases}$$

When applied to N×N images, where the separable wavelet transform is performed by 1-D transforms on rows and columns, the threshold G is chosen to be

$$G_v = \sigma\,\sqrt{2\,\log N^2}.$$
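The shrinkage rule and universal threshold above can be sketched as follows (we assume soft thresholding, the rule used by VisuShrink [2]; the function names are ours):

```python
import math

def visu_threshold(sigma, n):
    """Universal threshold G_v = sigma * sqrt(2 log N^2) for an N x N image."""
    return sigma * math.sqrt(2.0 * math.log(n * n))

def soft_shrink(x, g):
    """Shrink one wavelet coefficient toward zero by the threshold g."""
    return math.copysign(max(abs(x) - g, 0.0), x)
```

Every detail coefficient of the noisy transform is passed through `soft_shrink` with `g = visu_threshold(sigma, n)` before the inverse transform is taken.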

After applying the shrinkage function, an inverse wavelet transform is applied to produce the denoised image. This method is dubbed the VisuShrink method in [2]. As shown in [12], the VisuShrink method is a nearly optimal method for removing Gaussian noise. (The TAWS denoising in Figure 2(b) has an SNR only slightly below that of the 2-level version in Figure 1(d); there is a degree of subjectivity as to which of the TAWS denoisings is better, but clearly both are superior to the VisuShrink denoisings.) To be more precise, the shrinkage is performed only on the wavelet (detail) subbands; the all-lowpass subband is left unaltered. With a 2- or 3-level wavelet transform, the noise energy within the all-lowpass subband is usually small enough to be ignored.

The TAWS method is a combination of wavelet shrinkage with the predictive apparatus underlying Step 6 in the ASWDR method described above. This predictive apparatus makes use of an assumed correlation between significant transform values at a threshold T and their children at threshold T/2. In [13] it is shown that there is a high correlation between significant parent transform values, whose magnitudes are at least 2T, and significant child transform values, whose magnitudes are at least T. In the TAWS method, there are three parameters, which are set at the start. One parameter is the descent index d, which is a non-negative integer. The cutoff threshold G is then set at

$$G = \frac{a\,G_v}{2^{\,d}},$$

where the height index a satisfies a ≥ 1. For all of the TAWS denoisings reported on in this paper, the value of this second parameter a was set as √2. In Figure 2(b), for example, the descent index is d = 2, so G = √2·G_v/4. Note that if d = 0 and a = 1, then G = G_v, and TAWS reduces to the VisuShrink method. The third parameter is a depth index D, which is an integer lying between 1 and L, where L is the number of levels in the wavelet transform. We shall clarify below the nature of the depth index D. The TAWS method consists of applying Steps 1 through 7 above, but with Step 6 modified as follows. In this step, the integer K is defined via the equation

$$\frac{T_0}{2^{K}} = a\,G_v,$$

where T_0 denotes the initial threshold of Step 3.

That K is an integer can be arranged by a rescaling of wavelet transform values.
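Under this defining equation, K is simply a base-2 logarithm (a small sketch; the function name is ours, and exact integrality is assumed to follow from the rescaling just mentioned):

```python
import math

def descent_cycles(t0, a, g_v):
    """Number of full ASWDR cycles K before TAWS restricts scanning
    to children of significant parents; assumes t0 / 2**K == a * g_v
    holds exactly after rescaling the transform values."""
    return round(math.log2(t0 / (a * g_v)))
```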

Step 6 (TAWS). For the first K cycles through Steps 4–6, produce a new scanning order using the Step 6 described above in the ASWDR method. For cycles K+1 to K+d, scan only through children corresponding to significant parents in the highest subbands (levels 1 to D), while following the original recipe of Step 6 in the ASWDR algorithm for the other, lower, subbands.

From this description we can see that D must be no larger than L. For all of the TAWS denoisings reported on below, it was found that setting D equal to the smaller of d and L produced the best results. Step 6 (TAWS) is phrased in terms of adapting a scanning order. Since a scanning order is a process of scanning through insignificant transform values in order to select those having magnitude above the present threshold, this provides a selection procedure for distinguishing transform values dominated by noise from those dominated by the image. Unlike VisuShrink and cycle-spin thresholding, which use uniform thresholds for all transform values, the TAWS selection procedure uses different thresholds (aG_v, aG_v/2, …, aG_v/2^d), which vary spatially, based on the positions in the scan order of the transform values to which they are applied. Such spatially adaptive thresholding gives TAWS greater flexibility than a uniform thresholding method. In the next section, we shall demonstrate that the TAWS procedure outperforms VisuShrink and is competitive with other, more state-of-the-art, denoising methods.
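The spatially adaptive selection implied by Step 6 (TAWS) can be illustrated on a flat coefficient list with an explicit parent map (a much-simplified sketch: the parameter defaults and the `parent_of` representation are our assumptions, and the real method works on quad-trees of 2-D subbands within the ASWDR scan):

```python
def taws_select(coeffs, parent_of, g_v, a=1.0, d=2):
    """Toy TAWS-style coefficient selection.

    coeffs    : wavelet coefficients in scan order
    parent_of : parent_of[k] is the index of k's parent, or None
    g_v       : VisuShrink threshold
    Children of already-selected parents are tested against the
    progressively halved thresholds a*g_v/2, ..., a*g_v/2**d;
    all other coefficients must exceed the base threshold a*g_v.
    """
    base = a * g_v
    kept = [abs(c) >= base for c in coeffs]
    for step in range(1, d + 1):
        t = base / 2 ** step
        for k, c in enumerate(coeffs):
            p = parent_of[k]
            if not kept[k] and p is not None and kept[p] and abs(c) >= t:
                kept[k] = True
    return [c if keep else 0.0 for c, keep in zip(coeffs, kept)]
```

In the example below, the coefficient 6.0 with no significant parent is discarded, while an equal-magnitude coefficient whose parent is significant survives the halved threshold.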

3. SIMULATION RESULTS

In section 2 above, we gave one example of the performance of TAWS on denoising a noisy version of the Boats image. It performed significantly better than VisuShrink, both in terms of SNR and subjective visual quality. We shall now examine how TAWS performs on denoising the standard test images (Lena, Goldhill, Boats, and Barbara) contaminated with additive i.i.d. Gaussian noise of various standard deviations. We shall compare its performance on these test images with VisuShrink and with the following state-of-the-art denoising methods: localized Wiener filtering [14], cycle-spin thresholding, and the uHMT methods. In Table 1, we show a summary of how VisuShrink and TAWS performed in denoising these test images. In each case we list the highest SNR values for each method. For instance, for the first entry, the Lena image contaminated with noise of standard deviation 8, a 1-level transform produced the best VisuShrink denoising, so we list that SNR value. Likewise, a 3-level transform with parameter values d = 3 and D = 3 produced the best SNR value for TAWS. It is clear from the table that TAWS performs significantly better than VisuShrink on all of the test images. We illustrated above, with Figures 1 and 2, that TAWS also produces denoised images with better visual quality than VisuShrink denoised images.
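The SNR figures reported in the tables can be computed as signal power over error power in decibels (a common convention; we assume this is the definition used, and the function name is ours):

```python
import math

def snr_db(clean, denoised):
    """SNR in dB: power of the clean signal over power of the error."""
    sig = sum(v * v for v in clean)
    err = sum((u - v) ** 2 for u, v in zip(denoised, clean))
    return 10.0 * math.log10(sig / err)
```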

[pic]

Table 1: Comparison of VisuShrink and TAWS denoisings (σ is the standard deviation of the added Gaussian noise). The values of L are the number of wavelet transform levels. The values of d and D are the TAWS descent and depth indices, respectively.

The VisuShrink method has been improved upon by several methods. Among the best are the following: MatLab's Wiener method [14], the cycle-spin thresholding method [3], and the HMT methods [11], [10], and [4]. MatLab's Wiener method utilizes the classic Wiener filtering procedure on 3x3 subimages of the noisy image. The cycle-spin thresholding method performs hard thresholdings (all wavelet coefficients below the VisuShrink threshold G_v are set to zero) of all cyclic shifts (M shifts for images having M pixels) of an image and averages the results. The HMT methods utilize a Bayesian probabilistic approach in connection with Markov relations for the significance states, relative to thresholding, of the nodes in quad-trees of wavelet coefficients. The original HMT method, described in [11], is prohibitively time consuming; in fact, in [10] and [15] it is stated that it can take from 15 minutes to several hours to denoise 512x512 images. As pointed out in [4], the universal HMT (uHMT) method was developed in order to circumvent these considerable time costs of the original HMT method. For these reasons, we only compare TAWS with uHMT, since they are both low-complexity algorithms. There is also a cycle-spin averaged version of uHMT, which averages uHMT denoisings of all cyclic shifts of the noisy image; we shall refer to this method as uHMT-spin. Similarly, we shall refer to the cycle-spin averaged version of TAWS as TAWS-spin. The TAWS-spin method requires only a modest increase in memory resources: just two extra arrays, equal in size to the image array, for holding the image shifts and for storing partial averages.

In columns 3–6 of Table 2 we list the SNR values for denoising the same test images as in Table 1 for each of the methods (VisuShrink, MatLab's Wiener, uHMT, and TAWS) that do not make use of cycle-spin averaging. In columns 7–9, we list the SNR values for the methods (cycle-spin thresholding, uHMT-spin, and TAWS-spin) that do utilize cycle-spin averaging.
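Cycle-spin averaging, used by the three spin variants, can be sketched in 1-D (a minimal illustration; `denoise` stands for any base denoiser, and for 2-D images the shifts run over both rows and columns):

```python
def cycle_spin(signal, denoise):
    """Average a denoiser over all cyclic shifts of a 1-D signal."""
    n = len(signal)
    acc = [0.0] * n
    for s in range(n):
        shifted = signal[s:] + signal[:s]   # apply the cyclic shift
        out = denoise(shifted)              # denoise the shifted signal
        for i in range(n):                  # undo the shift, accumulate
            acc[(i + s) % n] += out[i]
    return [v / n for v in acc]
```

Because the averaging removes the dependence of a thresholded wavelet transform on the placement of the sampling grid, it typically suppresses the blocking and ringing artifacts of a single non-shifted denoising.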
For the four non-averaged methods, the largest SNR value in each case is displayed in boldface. Likewise, the largest SNR value among the three cycle-spin averaged methods is displayed in boldface. We first discuss the non-averaged methods, and then treat the cycle-spin averaged methods. In every case, the TAWS method employs the same values for d, D, and L as were used in the denoisings summarized in Table 1. The cycle-spin thresholding method uses for each image the value of L used for the VisuShrink denoisings in Table 1. We shall explain below which parameter values were used in the TAWS-spin denoisings.

For the non-averaged methods, the data in Table 2 show that no one method is superior among the three best (Wiener, uHMT, and TAWS). Wiener and TAWS yield higher SNR values for low or moderate noise levels. On the other hand, uHMT performs better with higher noise levels, much better than Wiener and moderately better than TAWS. Note that TAWS and Wiener significantly outperform uHMT on the Barbara denoisings. The reason for this is that uHMT is based on a single set of prior assumptions for the image data. More precisely, uHMT assumes a particular rate of decay of wavelet coefficients across scales. The default choice of this decay rate, specified in the MatLab code in [15] and derived in [10], is appropriate for smoother images (images with very little energy in higher spatial frequencies) like Lena. It is not appropriate for an image like Barbara, which has much greater energy in higher spatial frequencies. This explains why uHMT performs so well on denoising the Lena images, but not so well on the Barbara images.

Since SNR values are not precisely equivalent to human perception, it is important to examine some denoised images produced by these methods. We show in Figure 2 the Wiener and uHMT denoisings of the noisy image from Figure 1(b), as well as the VisuShrink and TAWS denoisings discussed above.
Although the Wiener denoising has the highest SNR, it also exhibits a large amount of residual, low-frequency noise, which makes it perceptually inferior to both the TAWS and uHMT denoisings. This is even more clearly revealed by comparing a magnification of the Wiener denoised image with magnifications of the uHMT and TAWS denoisings. Comparing the TAWS and uHMT denoisings in Figure 2, the TAWS denoising appears sharper and more focused, although it retains some residual noise.

[pic]

Figure 1: (a) Boats image. (b) Noisy image, σ = 20: SNR = 17.3 dB. (c) VisuShrink denoising, 2 levels: SNR = 21.8 dB. (d) TAWS denoising, 2 levels, d = 2, D = 2: SNR = 23.3 dB.

[pic]

Figure 2: (a) VisuShrink denoising, 3 levels: SNR = 20.8 dB. (b) TAWS denoising, 3 levels, d = 2, D = 2: SNR = 23.1 dB. (c) Wiener denoising: SNR = 23.6 dB. (d) uHMT denoising: SNR = 23.5 dB.

4. CONCLUSION

We have described two new methods for denoising images, the TAWS method and the TAWS-spin method, which provide denoisings comparable to state-of-the-art methods.

[pic]

Table 2: Comparison of various denoisings (σ is the standard deviation of the added Gaussian noise). Key: VS is VisuShrink, W is MatLab's Wiener method, HMT is the uHMT method, TAWS is the TAWS method, C-S is the cycle-spin thresholding method, H-S is the uHMT-spin method, and TS is the TAWS-spin method.

A problem for future research is how to choose the parameters d and D used in the TAWS and TAWS-spin methods (and how to choose the parameter a in the TAWS method). One possibility is an initial analysis of the statistical trends at higher thresholds (several standard deviations above the noise), in order to predict a good ending threshold. Better predictive schemes than the parent-child scheme employed at present should also be investigated, as they would reduce the number of isolated, residual noise spots that appear in TAWS denoisings (as well as increase the SNR). For example, use might be made of a lack of clustering [11] in higher-spatial-frequency subbands (this is the basis for the isolation filter described above in section 3). Finally, integration of TAWS with ASWDR into a combined compressor/denoiser is an important goal for future research.

REFERENCES

[1] J.S. Walker. A lossy image codec based on adaptively scanned wavelet difference reduction. To appear in Optical Engineering.

[2] D. Donoho and I. Johnstone. Ideal spatial adaptation via wavelet shrinkage. Biometrika, Vol. 81, pp. 425–455, Dec. 1994.

[3] R. Coifman and D. Donoho. Translation-invariant de-noising. In Wavelets and Statistics, Lecture Notes in Statistics, Springer, 1995.

[4] J.K. Romberg, H. Choi, and R.G. Baraniuk. Shift-invariant denoising using wavelet-domain hidden Markov trees. Proc. 33rd Asilomar Conference, Pacific Grove, CA, October 1999.

[5] A. Cohen, I. Daubechies, and J.-C. Feauveau. Biorthogonal bases of compactly supported wavelets. Commun. on Pure and Appl. Math., Vol. 45, pp. 485–560, 1992.

[6] J.M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans. Signal Proc., Vol. 41, No. 12, pp. 3445–3462, Dec. 1993.

[7] A. Said and W.A. Pearlman. A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. on Circuits and Systems for Video Technology, Vol. 6, No. 3, pp. 243–250, June 1996.

[8] S. Servetto, J.R. Rosenblatt, and K. Ramchandran. A binary Markov model for the quantized wavelet coefficients of images and its rate/distortion optimization. Proc. of ICIP 1997, Vol. 3, pp. 82–85.

[9] S.G. Chang, B. Yu, and M. Vetterli. Image denoising via lossy compression and wavelet thresholding. Proc. of ICIP 1997, Vol. 1, pp. 604–607.

[10] J.K. Romberg, H. Choi, and R.G. Baraniuk. Bayesian tree-structured image modeling using wavelet-domain hidden Markov models. Proc. SPIE Technical Conference on Mathematical Modeling, Bayesian Estimation, and Inverse Problems, pp. 31–44, Denver, July 1999.

[11] M.S. Crouse, R.D. Nowak, and R.G. Baraniuk. Wavelet-based statistical signal processing using hidden Markov models. IEEE Trans. Signal Proc., Vol. 46, No. 4, pp. 886–902, April 1998.

[12] D. Donoho, I. Johnstone, G. Kerkyacharian, and D. Picard. Wavelet shrinkage: asymptopia? J. Royal Stat. Soc. B, Vol. 57, No. 2, pp. 301–369, 1995.

[13] R.W. Buccigrossi and E.P. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans. on Image Proc., Vol. 8, No. 12, pp. 1688–1701, 1999.

[14] MatLab, a product of The MathWorks, Inc. The localized Wiener method is called wiener2 in MatLab, and is part of MatLab's Image Processing Toolbox.
