


Denoising of Images – a Comparison of Different Filtering Approaches

PIOTR BOJARCZAK1, STANISLAW OSOWSKI2

1Department of Automatics and Telematics of Transport

Radom University of Technology

Ul. Malczewskiego 29, 26-600 Radom

POLAND

2Department of Theory of Electrical Engineering, Measurement and Information Systems

Warsaw University of Technology

Pl. Politechniki 1, 00-661 Warsaw

POLAND

Abstract: The paper compares different filtering techniques for noise reduction that do not require prior knowledge of the spectral properties of the data. They are based either on averaging techniques (the low-pass mean filter and the median filter) or on compression and decompression of the noisy data.

Key-Words: denoising, wavelet transform, PCA network, low-pass mean filter, median filter

1 Introduction

The elimination or reduction of noise is an important subject in areas such as medicine and sonar or radar imaging, since practical digital images are often degraded to some extent and need to be restored to improve their quality [2,3,11]. Many filtering algorithms for noise removal follow from Wiener or Kalman filter theory [1]. However, to obtain good filtering results with these methods, we have to know in advance the spectral properties of the noise-free data and of the noise itself. Other methods rely on different kinds of filtering of the data [3,7,10,11,12] without any special knowledge of the noise.

The paper compares different filtering techniques for the reduction of random noise that do not require prior knowledge of the spectral properties of the data. They are based either on averaging techniques (the low-pass mean filter and the median filter) or on compression and decompression (reconstruction) of the noisy data (principal component analysis, wavelet transform).

2 Filtering techniques based on averaging

2.1 Low-pass mean filtering

This method is very often used because of its simplicity. The spatial low-pass filter uses an averaging operator defined in some neighborhood of the considered pixel [2,3]. Limiting ourselves to constant elements of the mask, the averaging operator in a 3×3 neighborhood can be defined by the matrix h

h = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}   (1)

In the filtering process, the transformed intensity of the pixel at position (m, n) of the image is calculated as the 2-dimensional convolution of the original image f and the averaging operator h, i.e.,

g(m,n) = \sum_{k}\sum_{l} h(k,l)\, f(m-k, n-l)   (2)

The mask is moved to the next pixel location in the image and the calculation is repeated until the mask reaches the end of the image. Thanks to the shape of its frequency characteristic, the filter removes higher-frequency components, which correspond to the small details of the processed image, including the noise.
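As an illustrative sketch of the averaging of Eq. (2) (assuming a 3×3 mask and edge replication at the image borders, details the paper does not specify), the filter can be written as:

```python
import numpy as np

def mean_filter(image, size=3):
    """Low-pass mean filter: replace each pixel with the average of its
    size x size neighborhood (borders handled by edge replication)."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for m in range(image.shape[0]):
        for n in range(image.shape[1]):
            out[m, n] = padded[m:m + size, n:n + size].mean()
    return out

img = np.array([[10, 10, 10],
                [10, 100, 10],
                [10, 10, 10]], dtype=float)
print(mean_filter(img))  # the 100 spike is smeared into its neighbors
```

Note how the bright spike is attenuated but spread over the neighborhood, which is exactly the blurring drawback discussed next.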

2.2 Median filters

The main drawback of the previous filter is the blurring that accompanies the filtering. Some improvement over plain averaging may be obtained by applying median filtering. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether or not it is representative of its surroundings [2,14]. Instead of simply replacing the pixel value with the mean of the neighboring pixel values, it is replaced with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel under consideration with the middle value. This method turns out to be particularly effective when the noise consists of strong spike-like components (known as salt-and-pepper noise).
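The sorting-and-replacing procedure above can be sketched as follows (a minimal illustration assuming a 3×3 neighborhood and edge replication at the borders, neither of which is fixed by the paper):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size neighborhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for m in range(image.shape[0]):
        for n in range(image.shape[1]):
            # np.median sorts the neighborhood and picks the middle value
            out[m, n] = np.median(padded[m:m + size, n:n + size])
    return out

# a salt-and-pepper spike is removed entirely, not merely attenuated
img = np.array([[10, 10, 10],
                [10, 255, 10],
                [10, 10, 10]])
print(median_filter(img))
```

Unlike the mean filter, the spike here is replaced by a value that actually occurs in the neighborhood, so edges are preserved rather than blurred.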

3 Filtering methods based on lossy compression

Let f_n and r_n denote sample sequences composed of n independent samples of the image function f and of the random variable r, respectively. These sequences can be written as f_n = [f(1), f(2), ..., f(n)] and r_n = [r(1), r(2), ..., r(n)]. Let x_n = f_n + r_n denote the sequence of data f_n corrupted by the noise represented by the vector r_n. Let us introduce the notion of the strength of the noise, understood here as the norm ||r_n||.

The elimination of noise is achieved in this algorithm through lossy compression followed by decompression of the noisy signal x_n [7]. At a high compression ratio Kr, the distortion introduced by coding/decoding, added to the random noise contaminating the data, is relatively high. At a small compression ratio, both the signal and the noise pass through the filter almost unchanged and no filtering effect is observed. To obtain good noise elimination we have to find a compromise point at which the attenuation of the noise is high enough while the coding/decoding error remains at an acceptable level. This is the breaking point of the rate-distortion characteristic, corresponding to a PSNR value equal to the strength of the noise corrupting the data. Knowing the value of the noise strength, it is enough to adjust the compression ratio to match this value.
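Locating the breaking point requires comparing the PSNR of the coded/decoded signal against the noise strength. A minimal sketch of the PSNR computation (assuming 8-bit images, so a peak value of 255; the paper does not state the bit depth):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# two images differing by a constant offset of 1 gray level (MSE = 1)
print(psnr(np.zeros((4, 4)), np.ones((4, 4))))
```

In the scheme described above, one would sweep the compression ratio Kr and stop where the PSNR of the decoded signal equals the known noise strength.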

3.1 Principal component analysis approach to the noise reduction

The principal component analysis (PCA) is a statistical method defining a linear transformation of the form [9]

y=Wx (3)

transforming the stationary stochastic data x ∈ R^N into the vector y ∈ R^K using the matrix W ∈ R^{K×N}, in such a way that the output space y of the reduced dimension K ...
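A minimal sketch of denoising via the compression y = Wx of Eq. (3) and its reconstruction (here W is built from the K leading eigenvectors of the data covariance matrix; the data layout and the choice of K are assumptions, not taken from the paper):

```python
import numpy as np

def pca_denoise(vectors, K):
    """Project data vectors onto the K leading principal directions and
    reconstruct: y = W x (compression), x_hat = W^T y (reconstruction).
    vectors: array of shape (n_samples, n_features)."""
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    cov = centered.T @ centered / len(vectors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :K].T            # K leading eigenvectors as rows
    y = centered @ W.T                       # y = W x  (Eq. 3, compression)
    return y @ W + mean                      # x_hat = W^T y + mean

# data lying exactly in a 1-D subspace is reconstructed perfectly with K = 1
p = np.outer(np.array([1.0, 2.0, 3.0, 4.0]), np.ones(2))
print(pca_denoise(p, 1))
```

Components beyond the K leading ones, which carry mostly noise, are discarded in the compression step; this is the mechanism by which the lossy transform acts as a filter.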