ASSIGNMENT NO. 1

TITLE: OPERATIONS ON AN IMAGE

PROBLEM STATEMENT: Write a C++ program with a GUI to capture an image from a remotely placed camera and read an uncompressed TIFF image, then perform the following functions (menu driven). Use of overloading and polymorphism is expected. Image Frame 1 is used for displaying the original image and Image Frame 2 is used for displaying the result of the action performed.
1. Sharpen the image
2. Blur the image (programmable rectangular seed)
3. Programmable image contrast and brightness
4. Rotate the image by a programmable angle
5. Convolution (overloading: FFT, other)
6. Histogram
7. Mean and standard deviation of the image

OBJECTIVES:
1. To learn the basic concepts of image processing.
2. To learn the basic operations on an image.
3. To generate a histogram, perform convolution, and calculate the mean and standard deviation of an image.

PREREQUISITES:
Software: Visual Studio 2010, OpenCV, Windows 7.
Hardware: Pentium IV processor.

THEORY:

Sharpen the Image

Human perception is highly sensitive to the edges and fine details of an image, and since these are composed primarily of high-frequency components, the visual quality of an image can be severely degraded if the high frequencies are attenuated or completely removed. Conversely, enhancing the high-frequency components of an image improves its visual quality. Image sharpening refers to any enhancement technique that highlights edges and fine details in an image. It is widely used in the printing and photographic industries for increasing local contrast and sharpening images.

In principle, image sharpening consists of adding to the original image a signal that is proportional to a high-pass filtered version of that image. Fig. 1 illustrates this procedure, often referred to as unsharp masking, on a one-dimensional signal. As shown in Fig. 1, the original image is first filtered by a high-pass filter that extracts the high-frequency components, and then a scaled version of the high-pass filter output is added to the original image, producing a sharpened version of it. Note that homogeneous regions of the signal, i.e., where the signal is constant, remain unchanged. The sharpening operation can be represented by

    yi,j = xi,j + λ F(xi,j)

where xi,j is the original pixel value at coordinate (i, j), F(·) is the high-pass filter, λ is a tuning parameter greater than or equal to zero, and yi,j is the sharpened pixel at coordinate (i, j). The value taken by λ depends on the grade of sharpness desired: increasing λ yields a more sharpened image. If color images are used, xi,j and yi,j are three-component vectors, whereas for gray-scale images they are single-component values. Thus the process described here can be applied to either gray-scale or color images, the only difference being that vector filters have to be used in sharpening color images, whereas single-component filters are used with gray-scale images.

The key point in an effective sharpening process lies in the choice of the high-pass filtering operation. Traditionally, linear filters have been used to implement the high-pass filter; however, linear techniques can lead to unacceptable results if the original image is corrupted with noise. A trade-off between noise attenuation and edge highlighting can be obtained if a weighted median (WM) filter with appropriate weights is used. To illustrate this, consider a WM filter applied to a gray-scale image using the filter mask of Eq. (M-1) (the mask itself is not reproduced in this copy). (Fig. 1)

Because of the weight coefficients in Eq. (M-1), for each position of the moving window the output is proportional to the difference between the center pixel and the smallest pixel around the center pixel. Thus, the filter output takes relatively large values for prominent edges in an image, and small values in regions that are fairly smooth, being zero only in regions of constant gray level. Although this filter can effectively extract the edges contained in an image, its effect on negative-slope edges differs from its effect on positive-slope edges. (Note: a change from a gray level to a lower gray level is referred to as a negative-slope edge, whereas a change from a gray level to a higher gray level is referred to as a positive-slope edge.) Since the filter output is proportional to the difference between the center pixel and the smallest pixel around the center, for negative-slope edges the center pixel takes small values, producing small values at the filter output. Moreover, the filter output is zero if the smallest pixel around the center pixel and the center pixel have the same value. This implies that negative-slope edges are not extracted in the same way as positive-slope edges.

To overcome this limitation, the basic image sharpening structure shown in Fig. 1 must be modified so that positive-slope edges and negative-slope edges are highlighted in the same proportion. A simple way to accomplish this is to: (a) extract the positive-slope edges by filtering the original image with the filter described above; (b) pre-process the image so that its negative-slope edges become positive-slope edges, and filter the pre-processed image with the same filter; and (c) combine appropriately the original image, the filtered version of the original image, and the filtered version of the pre-processed image to form the sharpened image. (Fig. 2)

Thus both positive-slope and negative-slope edges are equally highlighted. This procedure is illustrated in Fig. 2, where the top branch extracts the positive-slope edges and the middle branch extracts the negative-slope edges. To understand the effects of edge sharpening, a row of a test image is plotted in Fig. 3 together with rows of the sharpened image when only the positive-slope edges are highlighted, when only the negative-slope edges are highlighted, and when both are jointly highlighted. In Fig. 2, λ1 and λ2 are tuning parameters that control the amount of sharpness desired in the positive-slope and negative-slope directions, respectively; their values are generally selected to be equal. The output of the pre-filtering operation is defined as

    x'i,j = M − xi,j

with M equal to the maximum pixel value of the original image. This pre-filtering operation can be thought of as a flipping and shifting of the values of the original image such that negative-slope edges are converted into positive-slope edges. Since the original image and the pre-filtered image are filtered by the same WM filter, positive-slope and negative-slope edges are sharpened in the same way. (Fig. 3)
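As a concrete sketch of the basic sharpening equation y = x + λ F(x), the following C++/OpenCV fragment implements a simple linear unsharp mask. It uses a Gaussian residual as the high-pass F rather than the WM filter discussed above; the function name, the sigma of 3.0, and the default λ are illustrative assumptions, not values prescribed by the assignment.

    // Unsharp-masking sketch in C++/OpenCV: y = x + lambda*(x - lowpass(x)),
    // where the Gaussian residual (x - lowpass(x)) plays the role of F(x).
    #include <opencv2/opencv.hpp>

    cv::Mat sharpenImage(const cv::Mat& src, double lambda = 1.0)
    {
        cv::Mat lowpass;
        cv::GaussianBlur(src, lowpass, cv::Size(0, 0), 3.0);  // smoothed copy
        cv::Mat dst;
        // addWeighted computes (1 + lambda)*src - lambda*lowpass, which equals
        // src + lambda*(src - lowpass), saturated to the valid pixel range.
        cv::addWeighted(src, 1.0 + lambda, lowpass, -lambda, 0.0, dst);
        return dst;
    }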
Blur the Image (Programmable Rectangular Seed)

Blurring images so they become fuzzy may not seem like a useful operation, but it is actually very useful for generating background effects and shadows. It is also very useful for smoothing the 'jaggies' to anti-alias the edges of images, and for rounding out features to produce highlighting effects. Blurring is so important that it is an integral part of image resizing, though resizing uses a different method of blurring, one restricted to within the boundaries of a single pixel of the original image.

There are two general image blurring operators in ImageMagick: "-gaussian-blur" and "-blur". The results of the two are very close, but as "-blur" is a faster algorithm, it is generally preferred to the former, even though the former is more mathematically correct. (See Blur vs the Gaussian Blur Operator.)

Blur/Gaussian Arguments

The arguments for "-blur" and "-gaussian-blur" are the same, but to someone new to image processing the argument values can be confusing:

    -blur {radius}x{sigma}

The important setting in the above is the second value, sigma. It can be thought of as an approximation of just how much you want the image to 'spread' or blur, in pixels. Think of it as the size of the brush used to blur the image. The numbers are floating-point values, so you can use a very small value like '0.5'. The first value, radius, is also important, as it controls how big an area the operator should look at when spreading pixels. This value should typically be either '0' or at a minimum double the sigma.

To show the effects of these options, take a simple image with a lot of surrounding space (blur operators need lots of room to work) and create a table of results for various operator settings, purposely using a font that contains both thick and thin lines so as to show the fuzzing of small line details and large areas of color. A small radius limits any effect of the blur to pixels within that many pixels of the one being blurred (a square neighborhood). As such, a very small radius such as '1' effectively limits the blurring to the immediate neighbors of each pixel. Note that while sigma is a floating-point value, radius is not; if a floating-point value is given (or internally calculated), it is rounded up to the nearest integer to determine the 'neighborhood' of the blur.

How much each neighbor contributes to the final result is still controlled by sigma. A very small sigma (less than '1') limits the neighbors' contribution to a small amount, while a larger sigma draws more equal contributions from all the neighbors. The largest sigma, '65535', produces a simple averaging of all the pixels in the square neighborhood. Also notice that for a smallish radius but a large sigma, artifacts appear in the blurred result; this is especially visible in the output for "-blur 5x8". It is caused by the small square neighborhood 'cutting off' the area blurred, producing sudden stops in the smooth Gaussian curve of the blur and thus Ringing Artifacts along sharp edges. Never use a radius smaller than the sigma for blurs.

The ideal solution is to simply set radius to '0', in which case the operator will try to automatically determine the best radius for the sigma given. The smallest radius IM will use is 3, and it is typically 3 * sigma for a Q16 version of IM (a smaller radius is used for IM Q8, as it has less precision). The only time a non-zero radius is warranted is for a very small sigma or for specialized blurs. When possible, use a radius of zero for blurring operations.

Small values of sigma are typically used only to fuzz lines and smooth edges on images for which no anti-aliasing was used (see Anti-Aliasing for more info). In that situation a blur of '1x0.3' is a useful value to remove most of the 'jaggies' from images. Large values, however, are useful for producing fuzzy images for backgrounds or shadow effects (see Compound Fonts), or even image highlighting effects (as shown throughout the Advanced Examples page). Due to the way IM handles 'x'-style arguments, the sigma in the above is optional. However, sigma is the more important value, so it should really be radius that is optional, since radius can be determined automatically. As such, a single-value argument to these convolution operators is of little use. This is unlikely to change, as it has been this way for a very long time and changing it would break too many things.
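The ImageMagick discussion above concerns the command-line tool; in the C++/OpenCV program required by the assignment, the equivalent operations are a box blur over a programmable rectangular kernel and a Gaussian blur parameterized by sigma. The sketch below is a minimal illustration; reading the "programmable rectangular seed" as a user-specified kernel size is our assumption.

    // Blur sketches in C++/OpenCV. The "programmable rectangular seed" is
    // interpreted here as a user-chosen rectangular kernel size (kw x kh).
    #include <opencv2/opencv.hpp>

    cv::Mat blurBox(const cv::Mat& src, int kw, int kh)
    {
        cv::Mat dst;
        cv::blur(src, dst, cv::Size(kw, kh));  // box (simple averaging) filter
        return dst;
    }

    cv::Mat blurGaussian(const cv::Mat& src, double sigma)
    {
        cv::Mat dst;
        // cv::Size(0, 0) lets OpenCV derive the kernel size from sigma,
        // analogous to ImageMagick's recommended "0x{sigma}" argument.
        cv::GaussianBlur(src, dst, cv::Size(0, 0), sigma);
        return dst;
    }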
Programmable Image Contrast and Brightness

An image must have the proper brightness and contrast for easy viewing. Brightness refers to the overall lightness or darkness of the image. Contrast is the difference in brightness between objects or regions. For example, a white rabbit running across a snowy field has poor contrast, while a black dog against the same white background has good contrast. There are four ways that brightness and contrast can be misadjusted. When the brightness is too high, as in (a), the whitest pixels are saturated, destroying the detail in those areas. The reverse is shown in (b), where the brightness is set too low, saturating the blackest pixels. Figure (c) shows the contrast set too high, resulting in the blacks being too black and the whites being too white. Lastly, (d) has the contrast set too low; all of the pixels are a mid-shade of gray, making the objects fade into each other.

A test image illustrates brightness and contrast in more detail, displayed using six different brightness and contrast levels. The test image is an array of 80x32 pixels, with each pixel having a value between 0 and 255. The background of the test image is filled with random noise, uniformly distributed between 0 and 255. The three square boxes have pixel values of 75, 150 and 225, from left to right. Each square contains two triangles with pixel values only slightly different from their surroundings. In other words, there is a dark region in the image with faint detail, there is a medium region with faint detail, and there is a bright region with faint detail.

Adjustment of the contrast and brightness allows different features in the image to be visualized. In (a), the brightness and contrast are set at the normal level, as indicated by the B and C slide bars at the left side of the image. Now turn your attention to the graph shown with each image, called an output transform, an output look-up table, or a gamma curve. This controls the hardware that displays the image. The value of each pixel in the stored image, a number between 0 and 255, is passed through this look-up table to produce another number between 0 and 255. This new digital number drives the video intensity circuit, with 0 through 255 being transformed into black through white, respectively. That is, the look-up table maps the stored numbers into the displayed brightness. Figure (a) shows how the image appears when the output transform is set to do nothing, i.e., the digital output is identical to the digital input. Each pixel in the noisy background is a random shade of gray, equally distributed between black and white. The three boxes are displayed as dark, medium and light, clearly distinct from each other.

The problem is that the triangles inside each square cannot be easily seen; the contrast is too low for the eye to distinguish these regions from their surroundings. Figures (b) and (c) show the effect of changing the brightness. Increasing the brightness shifts the output transform to the left, while decreasing the brightness shifts it to the right. Increasing the brightness makes every pixel in the image appear lighter; conversely, decreasing the brightness makes every pixel appear darker. These changes can improve the viewability of excessively dark or light areas in the image, but will saturate the image if taken too far. For example, all of the pixels in the far right square in (b) are displayed with full intensity, i.e., 255. The opposite effect is shown in (c), where all of the pixels in the far left square are displayed as the blackest black, or digital number 0. Since all the pixels in these regions have the same value, the triangles are completely wiped out. Also notice that none of the triangles in (b) and (c) are easier to see than in (a): changing the brightness provides little (if any) help in distinguishing low-contrast objects from their surroundings.

Figure (d) shows the display optimized to view pixel values around digital number 75. This is done by turning up the contrast, which increases the slope of the output transform. For example, the stored pixel values of 71 and 75 become 100 and 116 in the display, making the contrast a factor of four greater. Pixel values between 46 and 109 are displayed from the blackest black to the whitest white. The price for this increased contrast is that pixel values 0 to 45 are saturated at black, and pixel values 110 to 255 are saturated at white. As shown in (d), the increased contrast allows the triangles in the left square to be seen, at the cost of saturating the middle and right squares.

Figure (e) shows the effect of increasing the contrast even further, resulting in only 16 of the possible 256 stored levels being displayed unsaturated. The brightness has also been decreased so that the 16 usable levels are centered on digital number 175. The details in the center square are now very visible; however, almost everything else in the image is saturated. For example, look at the noise around the border of the image: there are very few pixels with an intermediate gray shade; almost every pixel is either pure black or pure white. This technique of using high contrast to view only a few levels is sometimes called a gray-scale stretch.

The contrast adjustment is a way of zooming in on a smaller range of pixel values, while the brightness control centers the zoomed section on the pixel values of interest. Most digital imaging systems allow the brightness and contrast to be adjusted in just this manner, and often provide a graphical display of the output transform. In comparison, the brightness and contrast controls on television and video monitors are analog circuits and may operate differently. For example, the contrast control of a monitor may adjust the gain of the analog signal, while the brightness might add or subtract a DC offset. The moral is: don't be surprised if these analog controls don't respond in the way you think they should.
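For the linear case, the output transform described above reduces to dst = alpha*src + beta, where alpha controls the slope (contrast) and beta the shift (brightness). A minimal C++/OpenCV sketch, with illustrative function and parameter names:

    // Brightness/contrast sketch in C++/OpenCV: applies the linear output
    // transform dst = alpha*src + beta, saturating at 0 and 255 for 8-bit
    // images.
    #include <opencv2/opencv.hpp>

    cv::Mat adjustBrightnessContrast(const cv::Mat& src, double alpha, double beta)
    {
        cv::Mat dst;
        src.convertTo(dst, -1, alpha, beta);  // -1 keeps the source depth
        return dst;
    }

For example, alpha = 4.0 and beta = -184.0 reproduce the transform described in panel (d) above: stored value 71 maps to 100, 75 maps to 116, and values below 46 or above 109 saturate at black or white, respectively.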
Rotate Image by a Programmable Angle

The rotation operator performs a geometric transform which maps the position (x1, y1) of a picture element in an input image onto a position (x2, y2) in an output image by rotating it through a user-specified angle θ about a specified origin. In most implementations, output locations (x2, y2) which are outside the boundary of the image are ignored. Rotation is most commonly used to improve the visual appearance of an image, although it can be useful as a preprocessor in applications where directional operators are involved. Rotation is a special case of an affine transformation.

The rotation operator performs a transformation of the form:

    x2 = cos(θ)·(x1 − x0) − sin(θ)·(y1 − y0) + x0
    y2 = sin(θ)·(x1 − x0) + cos(θ)·(y1 − y0) + y0

where (x0, y0) are the coordinates of the center of rotation (in the input image) and θ is the angle of rotation, with clockwise rotations having positive angles. (Note that we are working in image coordinates, so the y axis goes downward; similar rotation formulas can be defined for when the y axis goes upward.) Even more than the translate operator, the rotation operation produces output locations (x2, y2) which do not fit within the boundaries of the image (as defined by the dimensions of the original input image). In such cases, destination elements which have been mapped outside the image are ignored by most implementations. Pixel locations out of which an image has been rotated are usually filled in with black pixels.

The rotation algorithm, unlike that employed by translation, can produce coordinates (x2, y2) which are not integers. In order to generate the intensity of the pixels at each integer position, different heuristics (or re-sampling techniques) may be employed; a code sketch follows this list. Two common methods are:

1. Allow the intensity level at each integer pixel position to assume the value of its nearest non-integer neighbor (x2, y2).
2. Calculate the intensity level at each integer pixel position as a weighted average of the n nearest non-integer values, with the weighting proportional to the distance or pixel overlap of the nearby projections.
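A minimal C++/OpenCV sketch of rotation about the image center is given below; cv::warpAffine resamples at the non-integer source coordinates, and the choice between cv::INTER_NEAREST and cv::INTER_LINEAR corresponds to the two heuristics listed above. Rotating about the center rather than another origin is our assumption.

    // Rotation sketch in C++/OpenCV: rotate about the image center by a
    // programmable angle in degrees. Pixels rotated out of the frame are
    // dropped, and vacated locations are filled with black, as described.
    // Note: OpenCV treats positive angles as counter-clockwise, the
    // opposite sign convention to the formula above.
    #include <opencv2/opencv.hpp>

    cv::Mat rotateImage(const cv::Mat& src, double angleDeg)
    {
        cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
        cv::Mat M = cv::getRotationMatrix2D(center, angleDeg, 1.0);
        cv::Mat dst;
        cv::warpAffine(src, dst, M, src.size(), cv::INTER_LINEAR,
                       cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
        return dst;
    }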
Convolution (Overloading: FFT, Other)

In mathematics and, in particular, functional analysis, convolution is a mathematical operation on two functions f and g, producing a third function that is typically viewed as a modified version of one of the original functions, giving the area of overlap between the two functions as a function of the amount that one of the original functions is translated. Convolution is similar to cross-correlation. It has applications in probability, statistics, computer vision, image and signal processing, electrical engineering, and differential equations.

Convolution can be defined for functions on groups other than Euclidean space. For example, periodic functions (such as those arising in the discrete-time Fourier transform) can be defined on a circle and convolved by periodic convolution, and a discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.

The convolution of f and g is written f * g, using an asterisk or star. It is defined as the integral of the product of the two functions after one is reversed and shifted; as such, it is a particular kind of integral transform:

    (f * g)(t) = ∫ f(τ) g(t − τ) dτ,  integrated over τ from −∞ to ∞

While the symbol t is used above, it need not represent the time domain. But in that context, the convolution formula can be described as a weighted average of the function f(τ) at the moment t, where the weighting is given by g(−τ) simply shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function.
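The "overloading" requirement in the problem statement suggests two implementations of one convolve operation: a direct spatial-domain version and an FFT-based version exploiting the convolution theorem. The sketch below follows the structure of OpenCV's documented DFT-convolution example; the function names are ours, and both inputs are assumed to be single-channel CV_32F images.

    // Convolution sketches in C++/OpenCV, showing the overloading idea.
    #include <opencv2/opencv.hpp>

    // Spatial-domain version. cv::filter2D computes correlation, so the
    // kernel is flipped first to obtain a true convolution.
    cv::Mat convolveSpatial(const cv::Mat& src, const cv::Mat& kernel)
    {
        cv::Mat flipped, dst;
        cv::flip(kernel, flipped, -1);             // flip both axes
        cv::filter2D(src, dst, CV_32F, flipped);
        return dst;
    }

    // FFT-based version: f * g = IDFT(DFT(f) .* DFT(g)), on zero-padded
    // copies large enough to hold the full convolution result.
    cv::Mat convolveFFT(const cv::Mat& f, const cv::Mat& g)
    {
        const int outRows = f.rows + g.rows - 1;   // full convolution size
        const int outCols = f.cols + g.cols - 1;
        const int rows = cv::getOptimalDFTSize(outRows);
        const int cols = cv::getOptimalDFTSize(outCols);

        cv::Mat fp, gp;                            // zero-padded copies
        cv::copyMakeBorder(f, fp, 0, rows - f.rows, 0, cols - f.cols,
                           cv::BORDER_CONSTANT, cv::Scalar::all(0));
        cv::copyMakeBorder(g, gp, 0, rows - g.rows, 0, cols - g.cols,
                           cv::BORDER_CONSTANT, cv::Scalar::all(0));

        cv::dft(fp, fp, 0, f.rows);                // forward transforms
        cv::dft(gp, gp, 0, g.rows);
        cv::mulSpectrums(fp, gp, fp, 0);           // pointwise product
        cv::dft(fp, fp, cv::DFT_INVERSE | cv::DFT_SCALE, outRows);

        return fp(cv::Rect(0, 0, outCols, outRows)).clone();
    }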
Histogram

The purpose of a histogram (Chambers) is to graphically summarize the distribution of a univariate data set. The histogram graphically shows the following: the center (i.e., the location) of the data; the spread (i.e., the scale) of the data; the skewness of the data; the presence of outliers; and the presence of multiple modes in the data. These features provide strong indications of the proper distributional model for the data. A probability plot or a goodness-of-fit test can be used to verify the distributional model.

The most common form of the histogram is obtained by splitting the range of the data into equal-sized bins (called classes). Then, for each bin, the number of points from the data set that fall into it is counted. That is:

Vertical axis: frequency (i.e., counts for each bin)
Horizontal axis: response variable

The classes can either be defined arbitrarily by the user or via some systematic rule; a number of theoretically derived rules have been proposed by Scott (Scott 1992).

The cumulative histogram is a variation of the histogram in which the vertical axis gives not just the counts for a single bin, but the counts for that bin plus all bins for smaller values of the response variable.

Both the histogram and the cumulative histogram have an additional variant in which the counts are replaced by normalized counts; these variants are called the relative histogram and the relative cumulative histogram. There are two common ways to normalize the counts (a code sketch follows this list):

1. The normalized count is the count in a class divided by the total number of observations. In this case the relative counts sum to one (or 100 if a percentage scale is used). This is the intuitive case where the height of the histogram bar represents the proportion of the data in each class.
2. The normalized count is the count in the class divided by the number of observations times the class width. With this normalization, the area (or integral) under the histogram is equal to one. From a probabilistic point of view, this normalization results in a relative histogram that is most akin to the probability density function, and a relative cumulative histogram that is most akin to the cumulative distribution function. If you want to overlay a probability density or cumulative distribution function on top of the histogram, use this normalization. Although it is less intuitive (relative frequencies greater than 1 are quite permissible), it is the appropriate normalization when using the histogram to model a probability density function.
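In OpenCV, the binning described above is provided by cv::calcHist. A minimal sketch for an 8-bit gray image with 256 equal-sized bins (the function name is illustrative):

    // Histogram sketch in C++/OpenCV: frequency counts of an 8-bit gray
    // image split into 256 equal-sized bins (classes).
    #include <opencv2/opencv.hpp>

    cv::Mat grayHistogram(const cv::Mat& gray)   // expects CV_8UC1
    {
        int histSize = 256;                      // number of bins
        float range[] = {0.0f, 256.0f};          // upper bound is exclusive
        const float* ranges[] = {range};
        cv::Mat hist;
        cv::calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);
        // Dividing hist by gray.total() would give the relative histogram,
        // whose bin heights sum to one, as described above.
        return hist;                             // 256x1 matrix of float counts
    }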
Mean and Standard Deviation of Image

The average brightness of a region is defined as the sample mean of the pixel brightnesses within that region. The average, ma, of the brightnesses over the Λ pixels within a region R is given by:

    ma = (1/Λ) Σ(m,n)∈R a[m,n]

Alternatively, we can use a formulation based upon the brightness histogram, h(a) = Λ·p(a), with discrete brightness values a, where p(a) is the normalized histogram. This gives:

    ma = (1/Λ) Σa a·h(a)

The average brightness, ma, is an estimate of the mean brightness, μa, of the underlying brightness probability distribution.

Standard Deviation

The standard deviation of the brightnesses within the region is given by:

    sa = sqrt( (1/(Λ−1)) Σ(m,n)∈R (a[m,n] − ma)² )

Using the histogram formulation gives:

    sa = sqrt( (1/(Λ−1)) ( Σa a²·h(a) − Λ·ma² ) )

The standard deviation, sa, is an estimate of the standard deviation, σa, of the underlying brightness probability distribution.
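OpenCV computes both estimates directly with cv::meanStdDev, over the whole image or over a region of interest given by a mask; note that it normalizes by Λ rather than Λ−1, which differs negligibly from the sample formula above for typical image sizes. A minimal sketch:

    // Mean and standard deviation sketch in C++/OpenCV, over the whole
    // image or over a region of interest given by a non-zero mask.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    void printStats(const cv::Mat& img, const cv::Mat& mask = cv::Mat())
    {
        cv::Scalar mean, stddev;
        cv::meanStdDev(img, mean, stddev, mask);  // per-channel results
        // meanStdDev normalizes by the pixel count (population form).
        std::cout << "mean = " << mean[0]
                  << ", std dev = " << stddev[0] << std::endl;
    }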
The various statistics are given in Table 1 for the image and the region of interest (ROI) shown in Figure 7.

    Statistic             Image    ROI
    Average               137.7    219.3
    Standard deviation     49.5      4.0
    Minimum                56      202
    Median                141      220
    Maximum               241      226
    Mode                   62      220
    SNR                    NA       33.3

    Table 1

CONCLUSION: In this way we performed various operations on an image. Image Frame 1 is used for displaying the original image and Image Frame 2 is used for displaying the result of the action performed on the image.