User’s Documentation

Project 2: Phong Shading

Phase 2: Z-Buffering, Complex Planes, and Textures

Jeffrey C. Keller

Description: The purpose of this program is to render objects that appear to be three-dimensional. The program renders these images using Phong shading, the technique of calculating the various three-dimensional visual components (ambient, diffuse, and specular) by interpolating the normal to a surface at each point on that surface. Implemented in this build are Z-buffering, the ability to render patches with more than three edges, and the ability to render textures on objects. The two main types of textures implemented are textures read from files and functional textures. Either type can be painted onto the object or bump mapped. Additionally, the user has the option to either stretch or tile the textures read from files. The user has various command line options, as well as the ability to create the data files that store the information used to render the objects on the screen. Details on how this is accomplished can be found in the rest of this document.

Execution: Executing the program is straightforward. The executable file name is ‘phong.’ To render the data found in the file ‘data1,’ type ‘phong data1.’ The user also has the option of changing the entire window size, as well as the width and height separately. To change the size proportionally, use the option ‘-s’ followed by an integer. Please note that the integers signify pixel values. To change the width, enter the option ‘-sx’ followed by an integer; to change the height, enter ‘-sy’ followed by an integer. Note that the data file must be the final item on the command line. For example, to render the information in the file ‘test’ at a size of 400x400, we would type ‘phong -s 400 test.’ To render the information in the file ‘test2’ in a window of height 300 and width 500, we would type ‘phong -sx 500 -sy 300 test2.’ If either the height or width is not specified, a default of 512 pixels is used.

Input: The only input that this program requires is the data file with the information needed to render the spheres and patches. This file also includes information about the light sources and the textures. Please note that the order of these objects in the file does not matter. The format of the information pertaining to light sources is “L x y z i”, where x, y, z, and i are all doubles. The values x, y, and z represent the point in 3-space where the light originates, and i is its intensity. Important note: there can be at most five light sources. This limit was chosen because with more light sources the images lose quality and become ‘washed out.’
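
For example, a hypothetical light source located at (10, 10, -5) with intensity 0.8 would be written as:

‘L 10.0 10.0 -5.0 0.8’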

The format of the lines pertaining to spheres is:

‘S TexOpt TileOpt Kar Kag Kab Kdr Kdg Kdb Ks n’

‘x y z radius’

As before, the first letter (S) is a literal, TexOpt and TileOpt are integers, and all of the other variables are doubles. TexOpt signifies the texture options for the sphere. If TexOpt equals 0, the sphere does not have any texture applied to it. If TexOpt is between 1 and 10, a texture is painted onto it. If TexOpt is between 11 and 20, a texture is bump mapped onto the sphere. For the above two cases, the actual texture to be utilized is given by the second (rightmost) digit of the integer. So, for TexOpt = 11, the sphere would have the texture denoted by ‘1’ bump mapped. For TexOpt 21 or 31, the sphere has the wood functional texture painted or bump mapped, respectively. For TexOpt 22 or 32, the sphere has the marble functional texture painted or bump mapped, again respectively. TileOpt signifies the end appearance of the texture: a ‘1’ denotes a stretched texture, and a ‘2’ signifies a tiled texture. While this number must be specified for all spheres, it is not used for spheres that are functionally textured. The value of each of the ‘K’ variables should be between 0.0 and 1.0. The ‘K’ variables signify the different constants used in calculating the color of the sphere, and ‘n’ is the exponent of the specular component. The values x, y, and z represent the center of the sphere in 3-space, and ‘radius’ is the radius of the sphere.
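
As a hypothetical example, an untextured sphere of radius 1.0 centered at (0, 0, 5) might be written as:

‘S 0 1 0.2 0.2 0.2 0.6 0.1 0.1 0.7 50.0’
‘0.0 0.0 5.0 1.0’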

The format of the lines pertaining to patches is:

‘P TexOpt TileOpt N Kar Kag Kab Kdr Kdg Kdb Ks n’

‘x y z normx normy normz’ (this line is repeated N times, one line per vertex of the patch)

As before, the first letter (P) is a literal, TexOpt and TileOpt are integers, and all of the other variables are doubles. The options for TexOpt and TileOpt are the same as for spheres, so for details on these variables please consult the previous paragraph. The ‘N’ variable signifies how many edges (and therefore vertices) the patch has. The values of all the ‘K’ variables should again be between 0.0 and 1.0. As before, the ‘K’ variables signify the different constants used in calculating the color of the patch. The following lines specify the vertices of the patch, one line per vertex. Each line contains the coordinates (in 3-space) of a vertex, as well as the values of the normal at that point (not normalized). Since there may be more than three points per patch, a patch is not necessarily flat.
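
As a hypothetical example, a three-edged (triangular) patch with identical normals at each vertex might be written as:

‘P 0 1 3 0.2 0.2 0.2 0.5 0.4 0.1 0.3 20.0’
‘0.0 0.0 3.0 0.0 0.0 1.0’
‘2.0 0.0 3.0 0.0 0.0 1.0’
‘0.0 2.0 3.0 0.0 0.0 1.0’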

The format of the lines pertaining to textures is:

‘T number filename’

The first letter (T) is a literal. ‘Number’ is the identifier of the texture. There can be at most ten textures read into the system. This number determines which texture will be either bump mapped or painted in the case that TexOpt is greater than 0 and less than 21. ‘Filename’ is the file in which the texture is stored. Note that the texture file must be a binary file, specified in column-major order. When designing data files, it is desirable to specify the textures last in the file, in increasing order of their identifiers.
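
For instance, the following hypothetical line assigns the binary texture file ‘bricks.tex’ to identifier 1:

‘T 1 bricks.tex’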

Output: After the data file is read into memory, the program will open a window and begin to render the images as they appear in the data file. The Z-buffer is used at this point to check whether one object is positioned behind another. If that is the case and the object closest to the viewer is specified first in the data file, the hidden portion of the object positioned behind it is not drawn. This saves rendering time. After the images are rendered, they will stay on the screen until the window is closed by the user.

Ending the Program: To end the program, close the window in which the objects are drawn.

Known Issues: As of now there are no known issues with this software, and there have been no bugs reported. If you discover a bug, please report it by sending e-mail to kellerj@barada.canisius.edu. Also, watch barada.canisius.edu/~keller for any updates to this package. Thank you for your purchase.

System’s Documentation

Project 2: Phong Shading

Phase 2: Z-Buffering, Complex Planes, and Textures

Jeffrey C. Keller

Description: The purpose of this program is to extend the first project. This program renders images that appear to be three-dimensional with the help of Phong shading, the technique of calculating the various three-dimensional visual components (ambient, diffuse, and specular) by interpolating the normal to a surface at each point on that surface. Implemented in this build are Z-buffering, the ability to render patches with more than three edges, and the ability to render textures on objects. The two main types of textures implemented are textures read from files and functional textures. Either type can be painted onto the object or bump mapped. Additionally, the user has the option to either stretch or tile the textures read from files. The user has various options from the command line, as well as the ability to create the data files that store the information used to render the objects on the screen. Details regarding the command line options and data file format can be found in the user’s documentation. This system’s manual gives the particulars of the internals of this program.

Overall System Design: The basic design of the system is as follows. The main driver of the program is phong.c++. It first checks the command line arguments and draws the window, with or without the specified parameters. Next, the driver opens the data file and reads in all of the light sources contained within; at this point it also takes note of the number of textures specified. After all of the light sources have been read, the driver closes and reopens the file (to reset the file pointer) and begins to read in the textures. Once this has been accomplished, the file is again closed and reopened so that the spheres and planes can be read in. The driver processes these objects in the order they are specified in the data file. For each object, the appropriate constructor is called and the object is rendered. It is at this point that the driver determines whether or not the surface has a texture associated with it, as well as whether that texture should be stretched or tiled. The driver then passes the texture information along to the appropriate render function. After all of the objects have been rendered, the driver waits for the user to close the display window.

The other classes are largely self-explanatory. The light source class handles a light source by specifying its location in 3-space and its intensity. The sphere class contains the location of the center of the sphere, its radius, and the various constants used in rendering the object via Phong shading. The plane class contains the locations of the various points used in denoting the plane, as well as the constants used in calculating the image’s shading. The buffer class handles the implementation of the z-buffer, which is used to calculate depth (for more information, see the user’s manual). The texture class handles painted textures and bump mapping from files. The functexture class handles both functional painted textures and functional bump mapping.

Data Structure Choices: For this program, the choice of data structures did not present any significant issues. Not including the classes themselves, the only data structures used were dynamic and static arrays. The sphere and plane classes have arrays that store the various constants used in calculating the Phong shading. The driver program contains an array to hold the light sources, as well as a dynamic array to store the textures. The plane class has a few dynamic arrays that hold the various slopes between points so that interpolation can be performed. The sphere class does not make use of dynamic arrays, but uses static arrays for some of the constants. Finally, the buffer class has a dynamic array to store the z-values. Dynamic arrays were used wherever the size of the array is only known at execution time, depending on the data file being rendered.
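
As an illustration of the depth test behind the buffer class, the following is a minimal sketch of a z-buffer backed by a dynamic array sized at execution time; the class and member names are illustrative only and do not necessarily match the actual source.

class Buffer {
public:
    Buffer(int width, int height)
        : width(width), height(height), z(new double[width * height])
    {
        // Start every entry "infinitely far" from the viewer.
        for (int i = 0; i < width * height; i++)
            z[i] = 1.0e30;
    }

    ~Buffer() { delete [] z; }

    // Returns true (and records the new depth) when 'depth' is closer to the
    // viewer than anything previously drawn at (x, y).  The direction of the
    // comparison depends on the viewing convention assumed here.
    bool closer(int x, int y, double depth)
    {
        if (depth < z[y * width + x]) {
            z[y * width + x] = depth;
            return true;
        }
        return false;
    }

private:
    int width, height;
    double *z;
};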

Design Details: As mentioned before, phong.c++ is the main driver file of this program. As it is straightforward, there are no real design issues that need to be discussed for this file. The driver first establishes the view vector. Next it loops through the command line arguments to check for options; if any valid arguments are given, the appropriate measures are taken. Then we open the data file and loop through it three times. On the first pass we read in all of the light sources and store them in an array; we also count the number of textures so that we can establish an array to hold them. On the second pass we go through the file storing the textures. Finally, we go through the file a last time, looking for spheres and planes, and render them as soon as we read them from the file. Then we wait for the window to be closed by the user.
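
To make the flow concrete, here is a minimal sketch of the option handling just described; the variable names and guard conditions are illustrative, and the three passes over the data file are only summarized in comments since they depend on the other classes.

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: phong [-s n | -sx n -sy n] datafile\n");
        return 1;
    }

    int width = 512, height = 512;          // defaults when no size is given
    const char *datafile = argv[argc - 1];  // the data file is the last argument

    // Scan the remaining arguments for the -s, -sx, and -sy options.
    for (int i = 1; i < argc - 1; i++) {
        if (strcmp(argv[i], "-s") == 0 && i + 1 < argc - 1)
            width = height = atoi(argv[++i]);
        else if (strcmp(argv[i], "-sx") == 0 && i + 1 < argc - 1)
            width = atoi(argv[++i]);
        else if (strcmp(argv[i], "-sy") == 0 && i + 1 < argc - 1)
            height = atoi(argv[++i]);
    }

    printf("rendering %s in a %d x %d window\n", datafile, width, height);

    // The real driver would now (1) read the light sources and count the
    // textures, (2) reopen the file and read each texture into memory, and
    // (3) reopen the file once more, constructing and rendering each sphere
    // and plane in the order it appears, then wait for the window to close.
    return 0;
}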

The sphere class is a little more involved, in that the render functions are complex. The only other function of this class is the constructor. There are actually three render functions in the sphere class. The first renders the sphere without a texture. The second renders the sphere with a texture from a file, whether painted or bump mapped. The third renders the sphere with a functional texture. All of these render functions could have been condensed into one general render function, but three separate functions were written to aid readability and understandability. In general, a render function takes as parameters the display window to be drawn to, the viewpoint, and the array of light sources. The render function then goes into a for loop over the x values for the sphere, and within it another for loop over the y values for the sphere. If the pixel we are currently looking at is ‘inside’ the boundaries of the sphere, we calculate the ‘z’ value for that pixel. If that ‘z’ value is closer to the viewer than any other ‘z’ value on the screen thus far (determined by use of the z-buffer), we calculate the normal and current view vectors and go into another for loop. This loop runs over the light sources, and for each one adds the specular and diffuse components for the pixel. After we have done this for each light source, we draw the pixel on the screen and move on to the next pixel. The boundary checking and the ‘z’ value are both based on the formula (x-xc)^2 + (y-yc)^2 + (z-zc)^2 = R^2. This function varies for the textures in a few subtle ways. For both of the painted cases, we call the appropriate function (based on whether the texture comes from a file or a function) and use the resultant array for the ambient components of the sphere. For the bump-mapped cases, we again call the appropriate function and use the resultant array (which has been converted to a vector) to perturb the normal at each point. It is in this way that we create the appearance of bumps, hence ‘bump mapping.’
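
The following is a simplified sketch of that per-pixel work for a single color channel, assuming an orthographic view along the z axis; the structure and names are illustrative, the specular term uses a Blinn-style halfway vector for brevity, and texturing and the display window are omitted.

#include <cmath>

struct Light { double x, y, z, intensity; };

// Shades pixel (x, y) against a sphere centered at (xc, yc, zc) with radius R.
// Returns true and fills 'shade' only when the pixel lies on the sphere and
// survives the z-buffer test at zbuf[index].
bool shadeSpherePixel(double x, double y,
                      double xc, double yc, double zc, double R,
                      double ka, double kd, double ks, double n,
                      const Light *lights, int numLights,
                      double *zbuf, int index, double &shade)
{
    double dx = x - xc, dy = y - yc;
    double d2 = R * R - dx * dx - dy * dy;
    if (d2 < 0.0)                         // outside the sphere's silhouette
        return false;

    double z = zc - sqrt(d2);             // front surface, from the sphere equation
    if (z >= zbuf[index])                 // something closer was already drawn
        return false;
    zbuf[index] = z;

    // Unit normal is (point - center) / R.
    double nx = dx / R, ny = dy / R, nz = (z - zc) / R;

    shade = ka;                           // ambient component
    for (int i = 0; i < numLights; i++) {
        // Unit vector from the surface point toward the light.
        double lx = lights[i].x - x, ly = lights[i].y - y, lz = lights[i].z - z;
        double len = sqrt(lx * lx + ly * ly + lz * lz);
        lx /= len;  ly /= len;  lz /= len;

        double ndotl = nx * lx + ny * ly + nz * lz;
        if (ndotl > 0.0)
            shade += lights[i].intensity * kd * ndotl;          // diffuse component

        // Halfway vector between the light and an assumed view direction (0, 0, -1).
        double hx = lx, hy = ly, hz = lz - 1.0;
        double hlen = sqrt(hx * hx + hy * hy + hz * hz);
        double ndoth = (hlen > 0.0) ? (nx * hx + ny * hy + nz * hz) / hlen : 0.0;
        if (ndoth > 0.0)
            shade += lights[i].intensity * ks * pow(ndoth, n);  // specular component
    }
    return true;
}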

The plane class, like the sphere class, is complex only when it comes to the render functions. Again, there are three render functions, as in the sphere class. They are divided in a similar manner for similar reasons, so we will not repeat what each function does; for those details please see the previous paragraph. The general render algorithm runs as follows. Most of the busy work is actually done by the constructor. The constructor runs through the points, storing them in a dynamic array, and at the same time calculates the slopes between the points and stores those in another dynamic array. The constructor does this for both the vertices and the normals at those vertices. Hence, the constructor gives us an array of slopes for both the vertices and the normals, as well as the minimum and maximum y values. The render function loops through these y values from the minimum to the maximum, using the scan-line technique. For each y value, the algorithm loops through the edges of the plane, determining whether the edge intersects the current raster. If it does, the point of intersection, as well as the normal at that point, are interpolated and stored. Once we have looked at all the edges, we have a left and a right point on the current raster. We then interpolate across the raster, first finding the ‘z’ value of each point. If that z is ‘closer’ to the viewer than any other z currently on the screen (determined by use of the z-buffer), we proceed to shade that pixel, making use of the Phong calculations described for the sphere render function and taking into account any texturing that must be applied.
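
The edge intersection and interpolation step can be sketched as follows for a single edge; in the actual code the same parameter interpolates the x coordinate, the z value, and each component of the normal, and the names here are illustrative.

struct Vertex { double x, y, z; };

// Returns true, with the interpolated x and z, when the edge from a to b
// crosses the current raster (scan line) y.
bool intersectEdge(const Vertex &a, const Vertex &b, double y,
                   double &x, double &z)
{
    if ((y < a.y && y < b.y) || (y > a.y && y > b.y) || a.y == b.y)
        return false;                      // the raster misses this edge

    double t = (y - a.y) / (b.y - a.y);    // parametric position along the edge
    x = a.x + t * (b.x - a.x);
    z = a.z + t * (b.z - a.z);
    return true;
}

Once the left and right intersections on a raster are known, an interpolation of exactly the same form runs across the raster in x; the resulting z is tested against the z-buffer, and only then is the Phong calculation applied.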

The texture class is used when the object must be rendered with a texture from a file. This class does painted textures as well as bump mapping. The constructor of this class takes in the filename and reads the file into a dynamic array. Of important design note is that the texture file is in column-major order. Besides the constructor, there are only two other functions in this class. The first is responsible for painted textures. It takes into account the starting and ending x and y values of the object to either stretch or tile the texture, according to the value passed in for the variable ‘sORt.’ If the texture is to be stretched (sORt = 1), the location passed in is converted to a percentage of the object’s extent and the corresponding values are passed back. If the texture is to be tiled (sORt = 2), the indices into the file wrap around the file’s width and height (by use of the modulo operation). In either case, the values from the file are passed back through the array ‘color.’ For bump mapping, the second function is similar, except that the information is passed back through the vector ‘change,’ and the values read from the file must be scaled from the range 0 to 255 into the range -0.125 to +0.125.
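
A minimal sketch of the stretch/tile lookup for a single grayscale value stored in column-major order is shown below; the names (including the ‘sORt’ flag) follow the description above, but the real function returns a full color through the ‘color’ array and also handles the bump-mapping scaling.

// tex holds the texture file's bytes in column-major order (texW columns of texH rows).
unsigned char lookup(const unsigned char *tex, int texW, int texH,
                     int x, int y,              // pixel being shaded
                     int xStart, int xEnd,      // object's extent in x
                     int yStart, int yEnd,      // object's extent in y
                     int sORt)                  // 1 = stretch, 2 = tile
{
    int tx, ty;
    if (sORt == 1) {
        // Stretch: convert the pixel position to a fraction of the object's
        // extent, then scale that fraction to the texture's dimensions.
        tx = (int)((double)(x - xStart) / (xEnd - xStart) * (texW - 1));
        ty = (int)((double)(y - yStart) / (yEnd - yStart) * (texH - 1));
    } else {
        // Tile: wrap around the texture's width and height with modulo.
        tx = (x - xStart) % texW;
        ty = (y - yStart) % texH;
    }
    return tex[tx * texH + ty];                 // column-major indexing
}

For bump mapping, the byte returned here would then be scaled from the 0 to 255 range of the file into the -0.125 to +0.125 range described above before it perturbs the normal.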

The functexture class is more straightforward than the texture class. The user can render using a functional texture that approximates either wood grain or marble. The two functions written for this class are used for either bump mapping or a painted texture. In the case of a painted texture, the appropriate function (either marble or wood grain) is called, and the result is passed back via an array. In the case of a bump map, again the appropriate function is called, whether it is wood grain or marble. Before the values can be passed back via the vector, they must be scaled from the range 0 to 1 into the range -0.125 to +0.125. The vector is then passed back by reference.
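
The functional textures themselves are not spelled out above, so the following is only one common formulation of wood grain and marble, with an explicitly labeled placeholder in place of a real noise function; the actual functions in functexture.c++ may differ.

#include <cmath>

// Placeholder for a smooth noise function; any real noise (e.g. Perlin) would do.
double noise(double x, double y, double z)
{
    return 0.5 * (sin(12.9 * x + 78.2 * y + 37.7 * z) + 1.0);
}

// Wood grain: concentric rings about an axis, taken from the fractional
// part of the scaled distance to that axis.  Returns a value in [0, 1).
double wood(double x, double y, double z)
{
    double rings = 8.0 * sqrt(x * x + z * z);
    return rings - floor(rings);
}

// Marble: a sine pattern along one axis, perturbed by noise, mapped to [0, 1].
double marble(double x, double y, double z)
{
    double v = sin(6.0 * x + 4.0 * noise(x, y, z));
    return 0.5 * (v + 1.0);
}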

Compiling Instructions: To compile, simply use the makefile included with the program by typing ‘make.’ That should be sufficient. Depending on the location of the standard libraries on your system, you may have to make some minor modifications to the makefile. After the program is built, run it by following the instructions outlined in the user’s documentation. The files needed for compilation are as follows:

phong.c++

plane.h and plane.c++

sphere.h and sphere.c++

vector.h and vector.c++

lightsource.h and lightsource.c++

displayWindow.h and displayWindow.c++

texture.h and texture.c++

functexture.h and functexture.c++

buffer.h and buffer.c++

Errors and Limitations: There are no known issues or errors, and no bugs have been reported. The limitations of this program are that there can be at most five light sources, only spheres and patches can be rendered, patches can have at most 10 edges, and at most 10 texture files can be specified in the data file.

Suggested Improvements: The only improvement that can be suggested at this time is to include shadows. Since shadows are very complex, there was not enough time to implement them.

Testing Information: This product was tested with multiple data files. Some of these data files have been included in the boxed product for the user to experiment with. As this was a continuation of the first project, the test cases specified in the previous user’s manual were run again. In addition, the next feature to be tested was the z-buffer; it was tested first for spheres, and then for spheres and patches together. After the z-buffer functionality passed testing, the next feature to be tested was textures. For all texture testing, spheres were tested first and then planes, because the sphere rendering functions are simpler, so any errors introduced would be easier to find and correct. Painted textures were tested first, then bump mapping, and finally both functional painted textures and functional bump mapping. All of these test cases were compared to known output and passed, so the correctness of the program is considered adequate.
