


Implementing and Comparing Different Lighting Models Using Cg

By Joshua Andersen for Independent Study CS5950

Supervised by Chuck Hansen

University of Utah

April 30th 2003

Abstract

My intent is to show how I implemented three different lighting models under OpenGL using Cg and to compare them to each other. Standard OpenGL uses a type of Phong lighting model with Gouraud shading (per-vertex color interpolation); this model can be implemented directly as a vertex program in Cg. The second model uses Phong lighting with Phong shading (per-fragment color calculations), using both a vertex and a fragment program in Cg. The third is an anisotropic lighting model with Gouraud shading, using a vertex program in Cg.

Preface

I became interested in taking an independent study course after I finished CS6610 (Advanced Computer Graphics I). For the final project in that class I wanted to implement a per-fragment lighting model in OpenGL. After much research and work I was able to make a decent approximation using projective textures and register combiners. While researching ways to do this in OpenGL I realized that with the upcoming Nvidia high-level graphics language Cg, along with the upcoming Geforce FX graphics boards, I would be able to implement a more robust and straightforward per-fragment lighting model. Since the University of Utah does not currently offer any advanced OpenGL graphics courses, I followed Chuck Hansen's advice to pursue further work in this area as an independent study course.

The Cg language is Nvidia's high-level programming language for modern graphics cards. Traditional graphics cards are limited to fixed-function calls through an API such as OpenGL or Direct3D. This allows the graphics rendered by these cards to be fast, but it limits the flexibility of what a programmer can do with the hardware. Newer hardware, such as Nvidia's Geforce 3 cards and later, can run assembly-like programs on the vertex and pixel (fragment) stages of the graphics pipeline. These features give programmers much more flexibility by making parts of the pipeline programmable. The problem is that most programmers don't want to write in an assembly language with syntax such as:

DPH result.position.x, v16.xyzz, c0[0];

DPH result.position.y, v16.xyzz, c0[1];

DPH result.position.z, v16.xyzz, c0[2];

DPH result.position.w, v16.xyzz, c0[3];

Cg is a high-level language with syntax similar to C that compiles graphics programs such as:

oposition = mul( modelViewProj, tempPosition );

to the above assembly for different target graphics cards. To accomplish this, the Cg compiler takes an ASCII input file along with a profile, such as VP30 and FP30 (for the Geforce FX) or VP20 and FP20 (Geforce 3 and 4), and generates the proper assembly for that profile. This can be done dynamically or offline. One feature I was able to use is the Cg call cgGLGetLatestProfile, which lets the program dynamically check the highest level of support the graphics card offers; with this information the Cg program can be compiled to that latest profile.

The Cg language allows up to two programs to be active at any given time: one vertex program and one fragment program. If either or both are not active, they are replaced by the standard OpenGL graphics pipeline. With a vertex program loaded, the part of the standard OpenGL pipeline dealing with transformation and lighting of vertices is replaced by the vertex program. The fragment program is similar, though it handles operations on the fragments generated by triangle rasterization after the vertex stage. All vertex programs are required to output a position, and all fragment programs are required to output a color. These programs can use various inputs, such as constant (uniform) parameters, or varying parameters such as normal vectors, vertex colors, and texture coordinates.
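As a sketch of this structure, a minimal Cg vertex program might look like the following (the parameter names here are illustrative, not from my actual programs):

```cg
// Minimal vertex program: transforms the vertex and passes its color through.
// modelViewProj is a uniform parameter set by the application.
void main(float4 position : POSITION,    // varying input
          float4 color    : COLOR,       // varying input
          out float4 oPosition : POSITION,  // required output
          out float4 oColor    : COLOR,
          uniform float4x4 modelViewProj)
{
    oPosition = mul(modelViewProj, position);
    oColor    = color;
}
```

The output position and color are then consumed by the rest of the pipeline, either the standard fixed-function stages or an active fragment program.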

The Phong lighting model in OpenGL is computed using the following variables:

Diffuse, Specular, Ambient, Emissive: rgba color coefficients for the material.

Shininess: coefficient to control the radius of the specular highlight.

Eye position, Light position: the positions of the eye and the light as vectors.

Light color, Ambient color: rgba colors used to define the color of the light and the ambient light.

Position, Normal: vectors giving the position and orientation of the vertex.

The following equations are used to compute the lighting in OpenGL.

L = (Light position – Position) normalized

V = (Eye Position – Position) normalized

Half angle = (L + V) normalized

Diffuse factor = the max of zero and the dot product of the Normal vector and L vector.

Specular factor = the max of zero and the dot product of Normal and Half angle to the Shininess power, or zero if Diffuse factor is less than or equal to zero.

Phong Color = Light color * (Diffuse factor * Diffuse + Specular factor * Specular) + Ambient * Ambient color + Emissive.

The result is a color close to the diffuse color where the angle between the light vector and the normal is small, and a reflective or specular highlight where the half-angle vector and the normal are close to each other. The ambient term approximates light that does not hit the object directly but bounces onto it off other objects. The emissive term adds a glow to objects that are supposed to be light sources. These equations can be built directly into a vertex program.
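The equations above can be written as a Cg function along the following lines (a sketch; the function and parameter names are mine, and all vectors are assumed to be in the same space):

```cg
// Phong lighting for one point, following the equations above.
float4 phongColor(float3 P, float3 N,
                  float3 lightPosition, float3 eyePosition,
                  float4 lightColor, float4 ambientColor,
                  float4 Kd, float4 Ks, float4 Ka, float4 Ke,  // material
                  float shininess)
{
    float3 L = normalize(lightPosition - P);   // vector to the light
    float3 V = normalize(eyePosition - P);     // vector to the eye
    float3 H = normalize(L + V);               // half-angle vector

    float diffuseFactor  = max(dot(N, L), 0);
    float specularFactor = pow(max(dot(N, H), 0), shininess);
    if (diffuseFactor <= 0)
        specularFactor = 0;  // no highlight on surfaces facing away

    return lightColor * (diffuseFactor * Kd + specularFactor * Ks)
         + Ka * ambientColor + Ke;
}
```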

The Phong lighting model assumes that all polygons and materials have a smooth surface whose microscopic facets are uniform across the surface with respect to the way light is reflected. It also assumes that no light is reflected from one object to another; more sophisticated techniques such as radiosity or ray tracing are needed to simulate this.

The first lighting model I implemented uses the above Phong lighting calculation in a vertex program, the results being the transformed position of the vertex and its color. No fragment program was used, so this information was passed directly to the standard OpenGL rasterizing engine, tested, and then promoted to a group of pixels for every polygon.

The actual color for each fragment is calculated using Gouraud shading, or per-vertex interpolation. A simple bounding box is used to check whether a given pixel is covered by the triangle, using barycentric coordinates. Barycentric coordinates are a homogeneous coordinate system based on the three vertices of a triangle; with them the position of any point on the triangle's plane can be uniquely defined, including points with negative values, which fall outside the triangle. Consequently, any pixel or fragment with a negative barycentric coordinate is discarded. For the ones that pass, the barycentric values are used to weight the color of the fragment. Barycentric coordinates have the property that the three values of any point sum to one. Using this, the final color of a fragment is:

Color = Color of vertex1 * barycentric value of vertex1 + Color of vertex2 * barycentric value of vertex2 + Color of vertex3 * barycentric value of vertex3
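The rasterizer performs this interpolation in hardware; as an illustration only, the step amounts to a function like this (names are mine):

```cg
// Illustrative only: what the rasterizer does for each covered fragment.
// b holds the fragment's barycentric coordinates (b.x + b.y + b.z == 1);
// a fragment with any negative coordinate lies outside the triangle and
// has already been discarded.
float4 gouraudColor(float3 b, float4 c1, float4 c2, float4 c3)
{
    return b.x * c1 + b.y * c2 + b.z * c3;
}
```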

Below are three tori rendered using the same light color but different materials.

[pic] [pic] [pic]

The big advantage of Gouraud shading is that it is fast, and most of the data needed to compute it is already available for triangle rasterization. It does, however, have some tradeoffs in visual quality. The main problem is that when lighting a triangle, a specular highlight might fall in the middle of it and, because of the per-vertex interpolation, not show up in the rendering at all. This problem becomes even more pronounced with real-time animated graphics, because the specular highlight can pop in and out depending on its current position.

A corollary problem is that specular highlights generally don't interpolate smoothly, resulting in edge and discoloring artifacts that are quite noticeable to the human eye. The rendering on the left is a sphere with low tessellation rendered with Gouraud shading; notice the lines formed in the center due to specular highlights.

To combat this problem, models can be tessellated into more polygons, shrinking the interior area where these discontinuities occur. Using very highly tessellated models, however, is generally not an efficient way to render in real time.

This is a more highly tessellated sphere rendered with the same material and shading model. Notice how the lines from the previous render are all but gone. The specular highlight is much closer to what we would expect, though some small imperfections in the color blending still make it look slightly angular in places.

Another way to get a nicer-looking specular highlight is to use Phong shading. The second lighting model uses the same Cg calculations as above to compute the lighting color; the main difference is that this is done at the per-fragment level. The vertex program is basically a pass-through function: it takes in a position and a normal vector, computes the screen-space position, and passes the original position and normal through to the fragment program as texture coordinates. When the pixels are rasterized, the position and normal vector are interpolated between the three vertices of the triangle, and this data is passed to the fragment program. The fragment program is practically identical to the vertex program in the first lighting model. By calculating the lighting value at every fragment, the problems that arise from interpolation go away.
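The pass-through vertex program can be sketched as follows (names illustrative; the fragment program then evaluates the Phong equations on the interpolated values):

```cg
// Pass-through vertex program for Phong shading. Position and normal are
// forwarded as texture coordinates so the rasterizer interpolates them
// across the triangle before the fragment program runs.
void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          out float4 oPosition : POSITION,   // screen-space position
          out float4 oPos      : TEXCOORD0,  // position, to be interpolated
          out float3 oNormal   : TEXCOORD1,  // normal, to be interpolated
          uniform float4x4 modelViewProj)
{
    oPosition = mul(modelViewProj, position);
    oPos      = position;
    oNormal   = normal;
}
```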

This is a rendering of the same low-tessellation sphere using Phong shading. The specular highlight is very smooth and round, even more so than on the highly tessellated sphere above. However, the low polygon count still makes the sphere look angular at the edges.

This is the same highly tessellated sphere used previously, this time rendered with Phong shading. There appear to be no flaws anywhere on the sphere: the specular highlight is smooth and round, as is the rest of the sphere.

Using Phong shading adds a great deal of quality to rendered models compared to Gouraud shading. The problem with using it in general is that it is much more expensive computationally, because polygonal models generally produce many more fragments than vertices. With modern hardware such as the Geforce FX, and presumably more powerful cards in the future with multiple fast programmable fragment pipelines, this may become less of an issue over time.

The third lighting model I implemented is anisotropic lighting. This model changes the assumption the Phong lighting model makes about microscopic facets being uniform over the surface with respect to the light: it attempts to render materials that have a grain or directional bias, so the lighting changes as the object rotates around its normal vector. Some materials that exhibit this property are CDs and polished metal.

This lighting is computed by the following pseudocode:

Compute the world-space position and normal vector.

Compute the half-angle vector of the eye and light vectors.

Dot both the light vector and the half-angle vector with the world normal to generate an (s,t) pair, which is used to look up the precomputed lighting in a 2D texture map. The highlight occurs when the two dot products are close to each other, in other words along the s,t diagonal. The texture coordinates must be properly scaled and biased from the [-1,1] range to [0,1].
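The steps above can be sketched as a Cg vertex program (a sketch under the assumption that positions are already in the space used for lighting; names are illustrative):

```cg
// Anisotropic lighting: generate an (s,t) coordinate per vertex for a
// lookup into a 2D texture holding the precomputed lighting.
void main(float4 position : POSITION,
          float3 normal   : NORMAL,
          out float4 oPosition : POSITION,
          out float2 oTexCoord : TEXCOORD0,
          uniform float4x4 modelViewProj,
          uniform float3   lightPosition,
          uniform float3   eyePosition)
{
    oPosition = mul(modelViewProj, position);

    float3 P = position.xyz;
    float3 L = normalize(lightPosition - P);  // light vector
    float3 V = normalize(eyePosition - P);    // eye vector
    float3 H = normalize(L + V);              // half-angle vector

    // Scale and bias the two dot products from [-1,1] into [0,1].
    oTexCoord = float2(dot(normal, L), dot(normal, H)) * 0.5 + 0.5;
}
```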

These renderings show the different lighting models, the first with Phong lighting followed by anisotropic:

[pic] [pic]

The above algorithm is put directly into a vertex program, and the resulting texture coordinate is passed to the standard OpenGL fragment engine. With low tessellation this algorithm produces artifacts due to interpolation. While trying to get the anisotropic lighting to work properly, I realized that if the light vector is not normalized, the distance of the light from the surface creates more rings along the grain. This seemed like an interesting effect, so by adding a new parameter that scales the light's x, y, z coordinates before normalizing, I was able to control the appearance of the lighting while still keeping a normalized light vector whose magnitude stays constant regardless of the actual light position. Below are renderings of the same model and light position using this scaling factor between 1.0 and 2.5.
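In code, this scaling trick amounts to a one-line change to the light-vector computation (lightScale is the new parameter; the name is mine):

```cg
// Scale the light's coordinates before normalizing. Values above 1.0
// add more rings along the grain, while L remains unit length.
float3 L = normalize(lightScale * lightPosition - P);
```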

[pic][pic][pic]

In conclusion, using Cg I have found that different lighting models can be implemented directly into the OpenGL pipeline and even switched dynamically. The performance of these lighting models is as follows: the standard vertex-based Phong lighting compiled to a 48-instruction vertex program; the per-fragment Phong lighting model compiled to a 6-instruction vertex program and a 40-instruction fragment program; the anisotropic lighting model compiled to a 28-instruction vertex program. On a Geforce FX 5800 Ultra the per-fragment lighting model was significantly slower than the other two; current hardware is slower on large fragment programs than on vertex programs. An extension that could easily be added is a fragment program that modulates the color of the anisotropic lighting, producing a shiny metallic-paint look. The anisotropic lighting model could also be moved into a fragment program, as above, for a per-fragment result without the artifacts from interpolation.


References

Various Nvidia documentation on Cg and OpenGL.

Randima Fernando, Mark J. Kilgard, The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics, Addison-Wesley 2003.
