


Extracting Parameters Used by the FireWire Camera for Color Balancing an Image

Prasanna Venkatesan, MSIT/RT, CMU

e-mail: pvenkat1@andrew.cmu.edu

Abstract – A problem encountered when mosaicing images captured from multiple cameras arranged to give an omnidirectional view is unnatural color and texture fusion in the resulting mosaic. This is due to variations in lighting and in camera settings (both manual and automatic). The mosaiced image exhibits a color mismatch because the overlapping regions are exposed to different lighting conditions in the respective cameras and are therefore represented differently. This paper describes work on retrieving the color balancing parameters of the cameras used; the aim was to use these parameters to enable a more coherent mosaiced image.

I. INTRODUCTION

The experiment involves mosaicing images obtained from four Fire-i FireWire cameras arranged as shown in Fig 1. Capturing images from different view angles, with different camera settings, creates a color mismatch due to lighting and camera conditions, as shown in Fig 2. The images shown are color balanced using the camera's own color balancing algorithm, which is unknown to us. The accompanying software stores the image information in RGB or YUV format. Each camera uses the same color balancing algorithm and uses two color channels to color balance an acquired image (Fig 3). Each camera points to a different scene in the environment, possibly with different illuminant conditions. The cameras color balance the acquired image by adjusting the two corresponding color channels, depending upon the stored image format. These parameters can differ for each image, so the mosaiced image exhibits a color mismatch. The accompanying software provides APIs through which one can set and retrieve the values of the respective channels; APIs to directly retrieve the parameters chosen by the automatic color balancing algorithm are non-existent. In this paper, Section II discusses the various methods attempted in search of an appropriate solution; the algorithms implemented are discussed briefly and the shortcomings of each are mentioned. Section III explains the properties of color balanced images obtained using the camera's algorithm. Section IV discusses the final solution. This is followed by the conclusion and future work.

Note: all figures have been added at the end of the document.

II. Methods Attempted

In order to extract the color balancing parameters, different color balancing algorithms were attempted. The aim was to come up with an algorithm that would give results similar to those of the camera's algorithm; the camera parameters could then be estimated using our algorithm's parameters as a reference. The following three algorithms were implemented and tested.

A. Gray World

The gray world [1] approach assumes the average color of an image is some predefined value of "gray," for example, half the value of the maximum intensity for each color component, i.e., (128, 128, 128). The gray world assumption is that, given an image with a sufficient amount of color variation, the average values of the red, green and blue components of the image should average out to a common gray value. Based on this assumption, image colors are corrected through the following normalization:

Rn = Ro * Mean(Intensity) / Mean(Ro)

Gn = Go * Mean(Intensity) / Mean(Go)

Bn = Bo * Mean(Intensity) / Mean(Bo)    (1)

where (Ro, Go, Bo) is the original color and (Rn, Gn, Bn) represents the corrected color channels. The predefined value of gray is chosen to be the average intensity value. This method gives good results for images having a good distribution of colors; the results are similar to the camera's color balanced image. For images with too much color imbalance, however, the method fails, as shown in Fig 4, because the underlying assumption of a sufficient amount of color variation does not hold.
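As an illustration, the following is a minimal Python sketch of the gray world normalization of Eq. (1). NumPy and an HxWx3 RGB image array are assumptions on our part; this is not the camera's own algorithm.

import numpy as np

def gray_world(img):
    # Gray world correction (Eq. 1): scale each channel so its mean
    # matches the mean intensity over the whole image.
    img = img.astype(np.float64)
    mean_intensity = img.mean()              # Mean(Intensity)
    channel_means = img.mean(axis=(0, 1))    # Mean(Ro), Mean(Go), Mean(Bo)
    corrected = img * (mean_intensity / channel_means)
    return np.clip(corrected, 0, 255).astype(np.uint8)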

B. White Patch

The white patch [1] approach is similar to the gray world method but assumes that the maximum value of each channel should correspond to white, i.e., (255, 255, 255). This translates to the assumption that each scene contains an object with a white reflectance. Image colors are corrected through the following normalization:

Rn = Ro * 255 / Max(Ro)

Gn = Go * 255 / Max(Go)

Bn = Bo * 255 / Max(Bo)    (2)

where (Ro, Go, Bo) is the original color and (Rn, Gn, Bn) represents the corrected color channels. The color correction is extremely poor using this method, as shown in Fig 5.
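A corresponding sketch of the white patch normalization of Eq. (2), under the same assumptions (NumPy, HxWx3 RGB array):

import numpy as np

def white_patch(img):
    # White patch correction (Eq. 2): scale each channel so its
    # maximum value maps to 255.
    img = img.astype(np.float64)
    channel_max = img.max(axis=(0, 1))       # Max(Ro), Max(Go), Max(Bo)
    corrected = img * (255.0 / channel_max)
    return np.clip(corrected, 0, 255).astype(np.uint8)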

C. Polynomial Mapping

This algorithm was adapted from [2]. In [2], a polynomial mapping is done between two images (to be mosaiced) to correct the color distortions arising due to illumination variation. For this, principal regions are chosen in the overlapping regions of the two images. Principal regions are defined as smooth homogeneous regions and are chosen by analyzing the color histograms of the overlapping regions of the two images. A polynomial transformation of degree 0, 1 or 2 is applied, corresponding to the number of peaks in the color histogram of the principal regions. Higher degree (>2) transformations result in out-of-gamut pixels and hence are not used. As an extension of the algorithm, the non color balanced image and the image color balanced by the camera's algorithm were chosen as the overlapped regions. The pixels in each of the two images were divided into three regions, corresponding to their distance from the mean: within mean +/- standard deviation, mean +/- 2*standard deviation, and mean +/- 3*standard deviation.

Let:

Meancb – mean of the color balanced image

Meanncb – mean of the non color balanced image

Sdcb – standard deviation of the color balanced image

Sdncb – standard deviation of the non color balanced image

Mcb1 = Mean(Pixels(Meancb +/- Sdcb))

Mcb2 = Mean(Pixels(Meancb +/- 2*Sdcb))

Mcb3 = Mean(Pixels(Meancb +/- 3*Sdcb))

Mncb1 = Mean(Pixels(Meanncb +/- Sdncb))

Mncb2 = Mean(Pixels(Meanncb +/- 2*Sdncb))

Mncb3 = Mean(Pixels(Meanncb +/- 3*Sdncb))

A1 = Mcb1/Mncb1, A2 = Mcb2/Mncb2, A3 = Mcb3/Mncb3    (3)

When n = 0, the factors A1, A2, A3 of Eq. (3) are chosen and the following transformation is applied, with Ai applied to the pixels of region i:

Rn = Ro * Ai

Gn = Go * Ai

Bn = Bo * Ai

When n = 1, the following transformation is applied. For each channel,

MUa = [Mncb1 1; Mncb2 1; Mncb3 1],  MUb = [Mcb1; Mcb2; Mcb3]

The coefficients of the one degree polynomial mapping function are given by

Co = MUa^-1 * MUb

(with MUa^-1 taken as the least squares pseudoinverse, since MUa is not square). Hence the mapping equation corresponding to the transformation of the non color balanced image to the resultant color balanced image is given by

Rn = [Ro 1] * Co,  Gn = [Go 1] * Co,  Bn = [Bo 1] * Co    (4)

When n = 2, the following transformation is applied. For each channel,

MUa = [Mncb1^2 Mncb1 1; Mncb2^2 Mncb2 1; Mncb3^2 Mncb3 1],  MUb = [Mcb1; Mcb2; Mcb3]

Co = MUa^-1 * MUb

Hence the mapping equation corresponding to the transformation of the non color balanced image to the resultant color balanced image is given by

Rn = [Ro^2 Ro 1] * Co,  Gn = [Go^2 Go 1] * Co,  Bn = [Bo^2 Bo 1] * Co    (5)

In all the polynomial transformations, Rn, Gn, Bn correspond to the resultant color balanced image. The Euclidean distance was calculated between the color balanced image (camera's algorithm) and the resultant color image, and the average of this distance matrix was taken. The degree of the polynomial was chosen using this metric: for example, if the resultant image obtained using a linear transformation has a lower average distance, it is preferred over the constant or quadratic transformation.
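The following Python sketch illustrates this extended mapping under some simplifying assumptions: NumPy is used, the degree-0 case applies only the factor A1 from Eq. (3) rather than all three per-region factors, and np.polyfit stands in for the explicit matrix computation Co = MUa^-1 * MUb of Eqs. (4) and (5).

import numpy as np

def region_means(ch):
    # Means of pixels within mean +/- k*std for k = 1, 2, 3
    # (the Mcb1..Mcb3 / Mncb1..Mncb3 quantities above).
    m, s = ch.mean(), ch.std()
    return np.array([ch[np.abs(ch - m) <= k * s].mean() for k in (1, 2, 3)])

def map_channel(ncb, cb, degree):
    # Map one non color balanced channel toward the color balanced
    # reference with a polynomial of the given degree (0, 1 or 2).
    m_ncb, m_cb = region_means(ncb), region_means(cb)
    if degree == 0:
        return ncb * (m_cb[0] / m_ncb[0])    # factor A1 of Eq. (3)
    co = np.polyfit(m_ncb, m_cb, degree)     # least squares fit through the region means
    return np.polyval(co, ncb)

def best_mapping(ncb_img, cb_img):
    # Choose the degree that minimizes the mean Euclidean distance
    # to the camera's color balanced image, as described above.
    cb = cb_img.astype(np.float64)
    results = []
    for degree in (0, 1, 2):
        mapped = np.dstack([map_channel(ncb_img[..., c].astype(np.float64),
                                        cb[..., c], degree)
                            for c in range(3)])
        dist = np.linalg.norm(mapped - cb, axis=2).mean()
        results.append((dist, np.clip(mapped, 0, 255).astype(np.uint8)))
    return min(results, key=lambda t: t[0])[1]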

The results of this method vary depending upon the distribution of colors in the original image. A good distribution of colors in the original image results in good color correction (Fig 6), whereas for an image with fewer colors, out-of-gamut pixels are produced (Fig 7). This is because the color correction is done in RGB color space, where there are correlations between the different channels' values; for example, the G and R values will be large if the B value is large. If the mapping increases the G values and reduces the B values, out-of-gamut pixels may be produced.

III. Properties of the Camera's Color Balanced Image

The camera's color balancing algorithm attempts to equalize the means of the respective color channels; the resulting values are dependent on the illumination of the scene. This was found by analyzing a number of different captured scenes that were color balanced by the camera's algorithm. Hence, for a color balanced image, the means of the color channels are equal. Plots of the red component vs. the green component and of the blue component vs. the green component were made; the green component was taken as the base, since the color balancing is done using only the red and blue channels.

The plots indicate that the points can be fitted to a line having a slope of 45 degrees and passing through the coordinates (0,0) and (128,128) (Fig 9). For a non color balanced image, the means of the color channels are not equal, and the plots of the red component vs. the green component and the blue component vs. the green component are scattered (Fig 8).
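These properties suggest a simple diagnostic. The sketch below (NumPy assumed; balance_diagnostics is an illustrative name of ours, not part of the camera software) computes the channel means and the least squares slopes of the two plots; for a color balanced image the means should be nearly equal and both slopes close to 1.

import numpy as np

def balance_diagnostics(img):
    # Channel means and least-squares slopes of the red-vs-green and
    # blue-vs-green scatter plots. For a color balanced image the three
    # means are (nearly) equal and both slopes are close to 1, i.e. the
    # 45 degree line through (0,0) and (128,128) described above.
    r, g, b = (img[..., c].ravel().astype(np.float64) for c in range(3))
    slope_rg = np.polyfit(g, r, 1)[0]
    slope_bg = np.polyfit(g, b, 1)[0]
    return (r.mean(), g.mean(), b.mean()), slope_rg, slope_bg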

IV. Solution

As mentioned earlier, the FireWire software provides APIs to set and retrieve color channel values. Both APIs accept the same arguments: the color channel flag, along with the value to be set, is passed to the SetCameraProperty API; for retrieving the set value, the color channel flag is passed along with a variable in which to store the retrieved value.

FiSetCameraProperty (...) – for setting a value

FiGetCameraProperty (...) – for retrieving the set value

Using these APIs, the color channels can be accessed separately. The range of both color channels is 0-255.

The following is a step-by-step description of the implemented method (a code sketch follows the list).

1. Scan through the range of color channel values (0-255 for each channel).

2. For each candidate pair of values, use FiSetCameraProperty to set the red and blue color channels.

3. Capture an image and calculate the means of the three color channels.

4. Calculate the mean ratios: Mean(red)/Mean(green) and Mean(blue)/Mean(green).

5. If the two mean ratios lie between 0.9 and 1.1, use FiGetCameraProperty to retrieve and store the corresponding pair of values.

6. From the stored values, choose the pair whose corresponding mean ratios are nearest to one.

7. Report this pair as the camera's color balancing parameters.
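A sketch of the search loop in Python is given below. The set_channels, get_channels and capture_rgb wrappers are hypothetical stand-ins for the FiSetCameraProperty / FiGetCameraProperty calls and the capture routine, whose exact signatures are not shown above, and the scan step size is an arbitrary speed/accuracy trade-off.

import numpy as np

# Hypothetical wrappers around the FireWire calls; the real
# FiSetCameraProperty / FiGetCameraProperty signatures are not
# given in this paper, so these are illustrative stand-ins.
def set_channels(camera, blue_u, red_v):
    raise NotImplementedError("wrap FiSetCameraProperty here")

def get_channels(camera):
    raise NotImplementedError("wrap FiGetCameraProperty here")

def capture_rgb(camera):
    raise NotImplementedError("return an HxWx3 RGB array")

def extract_balance_parameters(camera, step=8):
    # Scan the 0-255 range of both channels (step is an arbitrary
    # choice), keep pairs whose red/green and blue/green mean ratios
    # lie in [0.9, 1.1], and return the pair nearest to ratios of one.
    best_pair, best_err = None, float("inf")
    for u in range(0, 256, step):
        for v in range(0, 256, step):
            set_channels(camera, u, v)
            img = capture_rgb(camera).astype(np.float64)
            r_mean, g_mean, b_mean = img.mean(axis=(0, 1))
            rg, bg = r_mean / g_mean, b_mean / g_mean
            if 0.9 <= rg <= 1.1 and 0.9 <= bg <= 1.1:
                err = abs(rg - 1.0) + abs(bg - 1.0)
                if err < best_err:
                    best_pair, best_err = get_channels(camera), err
    return best_pair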

Using this method, parameters similar to those used by the camera's color balancing algorithm were extracted. To verify that the values obtained are the color balancing parameters, the color channels are set to the extracted values and the camera's automatic color balancing option is then selected. There is no change in the image once the automatic color balancing option is chosen, indicating that the parameters are indeed the camera's automatic color balancing parameters.

V. Conclusion & Future Work

The camera's automatic color balancing parameters were successfully extracted. The implemented method relies heavily on the camera's inherent algorithm and does not assess the quality of the color balanced image. There are cases where the camera's algorithm fails: for instance, when capturing a uniformly colored scene, say a green carpet, the automatic color balancing option corrects for the dominant color and turns the greenish carpet into a reddish one. In essence, the camera does not have the ability to distinguish between the inherent color of an object and a cast (a superimposed dominant color) occurring because of the illuminant condition. In [4], a method is proposed which detects and classifies the color cast of the input images as no cast, evident cast, ambiguous cast or intrinsic cast; based upon this cast classification, color correction is done. For instance, the green carpet would be classified as intrinsic cast and no color correction would be done. A combination of this method and the implemented method would make the algorithm more robust.

References

[1] J. Yin and J. R. Cooperstock, "Color Correction Methods with Applications to Digital Projection Environments," Journal of the Winter School of Computer Graphics, Vol. 12, No. 3, pp. 499-506, 2004.

[2] M. Zhang and N. D. Georganas, "Fast color correction using principal regions mapping in different color spaces," Distributed and Collaborative Virtual Environments Research Laboratory (DISCOVER), University of Ottawa, Ottawa, Canada.

[3] A. Rizzi, C. Gatta and D. Marini, "A new algorithm for unsupervised global and local color correction," Pattern Recognition Letters, Vol. 24, Issue 11, July 2003.

[4]

Figures

Fig 1 Arrangement of FireWire cameras.

Fig 2 Top row shows images from the individual cameras; bottom row shows the mosaiced image. The color mismatch is highlighted in the case of the red book and the telephone in the mosaiced image; this is because of the exposure to different lighting conditions in the individual images. In the case of the jacket there is no mismatch, as the lighting exposure is similar.

Fig 3 Camera properties window; the two color channels U/B and V/R are shown.

Fig 4 Improper color balance: the gray world assumption fails for the non color balanced image.

Fig 5 Poor color correction using the white patch assumption.

Fig 6 Polynomial mapping results.

Fig 7 Out-of-gamut pixels produced.

Fig 8 Non color balanced image: the pixel plots are scattered.

Fig 9 Properties of a color balanced image: the plots of the red component vs. the green component and the blue component vs. the green component are shown. For a color balanced image a least squares line fit can be done.
