(slide of topics)

Polygons and Shading

Pixel and Vertex Shaders

Realism Effects

Types of Lighting

(slide with title, and then switch over to videogame clip)

Andrew: A major area of Digital Arts & Sciences today is Real-Time Rendering. Real-Time Rendering displays graphics in response to real-time changes, usually driven by an outside source such as a person. As the name suggests, the picture one sees is rendered (and constantly re-rendered) at that particular moment in time. This allows any change made by the user to be displayed immediately, usually in motion, which is simulated by displaying many rendered images over a short span of time.

(Topics slide)

Tsoi: Our topics of discussion will include polygons and shading, vertex and pixel shaders, realism effects, and types of lighting.

(title slide: Polygons and NURBS)

Simon: The backbone of 3D graphics is polygons and NURBS.

(Slide polygons)

Simon: The basic structure of a three-dimensional object is built from polygons. A polygon starts with vertices, or points in space, which are connected by edges; the edges enclose a face, and faces assemble into a shape. This representation of shapes by vertices, edges, and faces is called a polygon mesh.
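As a minimal illustration (not from the slides; Python, with made-up coordinates), a mesh can be stored as a vertex list plus faces that index into it:

# Minimal polygon-mesh representation: a list of vertices (points in space)
# and a list of faces, each face indexing the vertices that form its corners.
# Hypothetical example: one quad face in the z = 0 plane.

vertices = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (1.0, 1.0, 0.0),   # vertex 2
    (0.0, 1.0, 0.0),   # vertex 3
]

faces = [
    (0, 1, 2, 3),      # one quad face; its edges are 0-1, 1-2, 2-3, 3-0
]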

(Slide polygon)

Simon: Any shape, no matter how smooth it may seem, is made of polygons. An example is this sphere. Smoothness is achieved by increasing the number of faces that compose the model. However, no matter how smooth an object may appear, nothing is ever completely “round” or “smooth,” since every polygon is ultimately composed of flat faces. In this particular example, shading is also applied to enhance the smoothness and depth.

(slide NURBS)

Simon: For generating and rendering extremely smooth surfaces, a type of mathematical model called a “non-uniform rational B-spline” (NURBS) is used. A spline is a piecewise polynomial function used to smooth data by interpolation. A NURBS curve is shaped by various “control points” or “control vertices.” Smooth surfaces are possible with NURBS because the curve does not pass through the control points themselves but approximates them. This provides smoothness not possible with polygons and allows unlimited control over level of detail.
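In standard notation (added here for reference, not from the slides), a degree-p NURBS curve with control points P_i, weights w_i, and B-spline basis functions N_{i,p} is

C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, P_i}{\sum_{i=0}^{n} N_{i,p}(u)\, w_i}

so each control point attracts the curve in proportion to its weight rather than forcing the curve through it.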

(slide Shadings)

A shading model computes a surface’s characteristics and determines how the surface is shaded. For scenes to look realistic, the polygons need to be shaded. There are four common shading models: flat shading, Gouraud shading, Phong shading, and Blinn shading.

(slide Flat Shading)

To compute flat shading, you find the angle between a polygon’s surface normal and the direction of the light source, then shade the entire polygon with one color scaled by the light intensity. The advantage of flat shading is fast computation. The disadvantage is that low-polygon models have a faceted look: you can see the edges of every face. (A small sketch follows the notes below.)

-angle between polygon's surface normal and direction of the light source

-shades each face depending on its color and light intensity

-advantage: fast computation

-disadvantage: faceted look (see edges)
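A minimal sketch of the computation (Python with NumPy; function and parameter names are illustrative, not from the slides):

import numpy as np

def flat_shade(face_normal, light_dir, face_color, light_intensity):
    """Shade one face: intensity depends on the angle between the face's
    surface normal and the direction toward the light (Lambert's cosine)."""
    n = face_normal / np.linalg.norm(face_normal)
    l = light_dir / np.linalg.norm(light_dir)
    # cos(angle) between normal and light direction; clamp back faces to 0
    brightness = max(np.dot(n, l), 0.0) * light_intensity
    return tuple(c * brightness for c in face_color)

# Every pixel of the face gets this one color, hence the faceted look.
print(flat_shade(np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.5, 1.0]),
                 (1.0, 0.2, 0.2), 1.0))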

(slide Gouraud Shading)

With Gouraud shading, you find the normal vector at each vertex of every face and calculate the color at each vertex. Linearly interpolating those colors across each face creates a smooth-looking surface. However, specular highlights take on a star shape, and silhouette edges still appear faceted. (A sketch of the interpolation step follows the notes below.)

-find normal vector at each vertex

-interpolates colors by averages between vertices across the face

-disadvantage: faceted look still, star-shaped highlight
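A sketch of the interpolation step (Python with NumPy; barycentric weights are assumed here as the standard way to interpolate linearly inside a triangle):

import numpy as np

def gouraud_shade(vertex_colors, bary):
    """Color a point inside a triangle by linearly interpolating the colors
    computed at its three vertices, using barycentric weights (b0+b1+b2 = 1)."""
    c0, c1, c2 = (np.asarray(c, dtype=float) for c in vertex_colors)
    b0, b1, b2 = bary
    return b0 * c0 + b1 * c1 + b2 * c2

# The center of the triangle blends the three vertex colors equally.
print(gouraud_shade([(1, 0, 0), (0, 1, 0), (0, 0, 1)], (1/3, 1/3, 1/3)))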

(slide Phong Shading)

Phong shading interpolates the vertex normals across each face, so that a normal vector is available at every pixel, and then calculates the color at each pixel from that normal. Because it computes lighting at every pixel, it takes roughly eight times as long to render as Gouraud shading.

-interpolate the vertex normals to get a normal at each pixel

-calculate the color at each pixel from its interpolated normal

-disadvantage: roughly 8x as long as Gouraud to render

-rarely used in games

(slide Phong Reflection Map)

Phong shading also includes the Phong reflection model. Because Phong suits plastic-like materials, the lighting equation includes a specular highlight, producing the Phong reflection. Specular light is the direct reflection of light by a surface. Ambient light is a constant illumination over the entire surface. Diffuse light simulates light that penetrates a surface and is reflected by the surface in all directions. (A numerical sketch follows the notes below.)

-reflection model describes how a point on a surface should look

-combining the 3 light elements: diffuse, specular, and ambient

-shading model actually shades the points on the surface

-3 light elements

1. ambient: constant illumination on the entire surface

2. diffuse: hits a surface and gets reflected by the surface in all directions

3. specular: direct reflection of light by a surface
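A minimal numerical sketch of the standard Phong equation combining the three terms (Python with NumPy; coefficient names ka, kd, ks are conventional, not from the slides):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(n, l, v, ka, kd, ks, shininess, light=1.0, ambient=0.1):
    """Classic Phong reflection model: ambient + diffuse + specular.
    n, l, v are the surface normal, direction to the light, and direction
    to the viewer; ka/kd/ks are the material's reflection coefficients."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    r = 2.0 * np.dot(n, l) * n - l          # mirror reflection of l about n
    diffuse = kd * max(np.dot(n, l), 0.0) * light
    specular = ks * max(np.dot(r, v), 0.0) ** shininess * light
    return ka * ambient + diffuse + specular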

(slide Blinn Shading)

Blinn shading is very similar to the Phong reflection model. It trades visual precision for computing efficiency. Instead of computing the mirror-reflection vector, it computes the halfway vector between the viewer and the light source, H = (L + V) / |L + V|, where L is the direction to the light source and V is the direction to the viewer. You then use the angle between the halfway vector and N, the normalized surface normal.

-very similar to the phong reflection model

-trades visual precision for computing efficiency

-calculate half-vector between viewer and light source

-then the angle between this and the normalized surface normal
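A sketch of the Blinn specular term, under the same conventions as the Phong sketch above:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_specular(n, l, v, ks, shininess, light=1.0):
    """Blinn's variant: instead of the mirror-reflection vector, use the
    half vector H = (L + V) / |L + V| and compare it against the normal.
    Cheaper than Phong because H needs no reflection computation."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                    # halfway vector
    return ks * max(np.dot(n, h), 0.0) ** shininess * light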

(slide vertex and pixel shaders)

Andrew: Vertex and pixel shaders run on the graphics processing unit (GPU).

(slide pixel shader)

Pixel Shaders

Andrew: A pixel shader manipulates individual pixels to apply effects to an image, such as realism, bump mapping, shadows, and explosion effects. It is a graphics function that calculates effects on a per-pixel basis.

Andrew: Shading at the individual pixel level brings out an extraordinary level of surface detail, allowing you to see effects beyond the triangle level. Pixel shaders give artists and developers the ability to create per-pixel effects that approach realism.
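As a rough illustration of the per-pixel idea (Python with NumPy; a simple grayscale filter stands in for a real shader effect):

import numpy as np

def grayscale_shader(image):
    """A per-pixel effect in the spirit of a pixel shader: the same small
    function is evaluated for every pixel independently. Here it converts
    an RGB image (H x W x 3, floats in 0..1) to luminance."""
    weights = np.array([0.299, 0.587, 0.114])   # standard luma weights
    luma = image @ weights                       # one value per pixel
    return np.stack([luma] * 3, axis=-1)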

(slide vertex shader)

Vertex Shaders

Andrew: A vertex shader is a graphics process used to add special effects to objects in a 3D environment by performing mathematical operations on the objects' vertex data. Each vertex is defined by its location in 3D space using x-, y-, and z-coordinates. Vertices may also carry colors, textures, and lighting characteristics. Vertex shaders don't change the type of data; they simply change its values, so that a vertex emerges with a different color, different texture coordinates, or a different position in space.

(Vertex Shader Example – Fluttering Flag, multiple slides)

Andrew: This example shows how manipulating vertex positions can simulate a fluttering flag. For this to work there needs to be a continuously changing input; in this example, that input is an angle used to translate the vertices. The first part declares some uniform global variables and links them into the vertex shader. We want the shader to output a transformed vertex position in order to plot the vertex in the new image. In the images shown, the vertex shader displaces the flag along its horizontal edges first and then along its vertical edges; both are driven by varying angles to add realism. The remaining lines in the shader carry out the standard transformations and pass the texture coordinate back into the pipeline.
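Since the shader source itself is on the slides, here is only a rough Python sketch of the displacement idea (amplitude and frequency values are invented):

import math

def flutter(vertices, angle, amplitude=0.1, frequency=4.0):
    """Displace each vertex along z with a sine wave that travels as the
    'angle' input changes every frame. The wave is phased by the vertex's
    horizontal position so the flag ripples outward from the pole."""
    out = []
    for (x, y, z) in vertices:
        z = z + amplitude * x * math.sin(frequency * x + angle)
        out.append((x, y, z))
    return out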

(slide bump mapping)

Bump Mapping

Simon: Sometimes shading and texturing aren’t enough to render certain objects realistically. Objects like oranges have a bumpy, rough surface, and modeling the bumps through polygonal manipulation would be too computationally expensive. An alternate method that simulates bumps on a surface is called bump mapping. Wherever a bump appears on a surface, the surface normals differ from those of a flat surface. These “bump” surface normals are applied to the flat surface and used in a dot product with unit vectors pointing toward the light source, generating the right amount of light at each point to simulate the existence of a bump.
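A sketch of the idea (Python with NumPy; it assumes a height map as the source of the bumps, which is one common way to obtain the perturbed normals):

import numpy as np

def bump_lambert(height, light_dir, scale=1.0):
    """Bump mapping sketch: derive perturbed normals from a height map's
    gradients (the geometry itself stays flat), then light each texel with
    the dot product of its perturbed normal and the light direction."""
    dh_dy, dh_dx = np.gradient(height.astype(float))
    # Perturbed normal of a height field z = h(x, y): (-dh/dx, -dh/dy, 1)
    n = np.dstack([-scale * dh_dx, -scale * dh_dy,
                   np.ones_like(height, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)    # per-texel brightness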

(slide displacement mapping)

Displacement Mapping

Simon: An extension of bump mapping is displacement mapping, where points along the surface are actually displaced. The displacement heights are usually read from a texture called a displacement map.

(slide displacement mapping vs bump)

(slide light maps)

Light Maps

Andrew: A light map is a texture map applied to a material to simulate the effect of a local light source; examples include the addition of specular highlights and other luminance textures.

(slide Show Example of the Specular Highlights)

Andrew: Light maps improve the appearance of local light sources without having to repeatedly recalculate lighting effects on a particular object. An example is the game Quake, which uses light maps to simulate the effects of stationary and moving light sources.

Andrew: Using light maps requires a multipass algorithm. A texture simulating the light's effect on the object is created and applied to objects in the scene. Appropriate texture coordinates are generated, and texture transformations can be used to position the light and create moving or changing light effects. Multiple light sources can be handled with a combination of more complex texture maps and/or more passes of the algorithm.
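One common way to apply the precomputed map in a later pass is a simple per-texel multiply; a sketch (Python with NumPy, illustrative names):

import numpy as np

def apply_light_map(base_texture, light_map):
    """Second pass of the multipass idea: modulate the base texture by a
    precomputed light map (both H x W x 3, floats in 0..1), so the lit
    result needs no per-frame lighting recalculation."""
    return np.clip(base_texture * light_map, 0.0, 1.0)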

(Slide On Lights Maps Used Within Quake)

(slide reflection mapping)

Reflection Mapping

Andrew: Reflection mapping is a method of simulating a complex mirroring surface by means of a precomputed texture image. The texture stores an image of the object’s surroundings. There are several ways of storing the surrounding environment in a texture: the most common are standard environment mapping, in which a single texture contains the image of the surroundings as a reflection on a mirrored ball, and cubic environment mapping, which will be discussed later.

Ray Tracing vs. Reflection Mapping

Andrew: Reflection mapping is more efficient than the ray-tracing approach to computing reflections, which shoots a ray and follows its exact optical path. However, a drawback of reflection mapping is the absence of self-reflection: you cannot see any part of the reflected object inside the reflection itself.

Applications in Real Time 3D Graphics

Andrew: Cube mapped reflection is a technique that uses cube mapping to make objects look like they reflect the environment around them. What is produced is not a true reflection, since objects around the reflective one will not be seen in the reflection, but the desired effect is usually achieved.

(slide cube mapped reflection)

Andrew: Cube mapped reflection works by taking the vector coming from the viewer’s camera and reflecting it about the surface normal at the point where the camera ray intersects the object. The resulting reflected ray is then used to look up the cube map, which stores what would be seen from that direction. This creates the effect that the object is reflective.
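A sketch of the two steps (Python with NumPy; the reflection formula and the dominant-axis face lookup are the standard constructions):

import numpy as np

def reflect(incident, normal):
    """Reflect the camera ray about the surface normal: R = I - 2(N.I)N."""
    n = normal / np.linalg.norm(normal)
    return incident - 2.0 * np.dot(n, incident) * n

def cube_face(r):
    """Pick which of the six cube-map faces the reflected ray falls on:
    the axis with the largest absolute component, and its sign."""
    axis = int(np.argmax(np.abs(r)))
    sign = '+' if r[axis] >= 0 else '-'
    return sign + 'xyz'[axis]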

(slide cube mapped reflection)

Andrew: Cube mapped reflection, when used correctly, may be the fastest method of rendering a reflective surface. To increase rendering speed, the reflected ray is calculated at each vertex and then interpolated across the polygons attached to that vertex. This eliminates the need to recalculate the reflection for every pixel.

Anti-Aliasing

(slide jagged picture)

Simon: When a high-resolution signal is displayed on a lower-resolution display, distortion artifacts appear, such as jagged lines in place of smooth edges. Anti-aliasing is the method used to correct this problem.

(slide supersampling)

Simon: One anti-aliasing approach used in computer games and other programs that generate real-time rendered images is called “supersampling.” Given the square area covered by a pixel, multiple points are sampled within that area, and the color the pixel displays is the average color or intensity of those samples.
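A sketch (Python with NumPy; `render` is a hypothetical function that returns the exact color at a point, standing in for the renderer):

import numpy as np

def supersample(render, x, y, samples=4):
    """Supersampling sketch: evaluate a samples-by-samples grid of points
    inside the pixel's square and display their average color."""
    offsets = np.linspace(0.0, 1.0, samples, endpoint=False) + 0.5 / samples
    colors = [np.asarray(render(x + dx, y + dy))
              for dx in offsets for dy in offsets]
    return np.mean(colors, axis=0)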

(slide anti-aliased line)

Simon: As you can see, the areas where edges were jagged have been replaced by gray pixels whose intensities are averages of the surrounding white and black pixels.

(slide motion blur)

Tsoi: Other visual concepts that enhance realism for rendering real-time graphics are Motion Blur and Depth of Field.

(slide Motion Blur)

Tsoi: The same averaging idea used to anti-alias images can be used to create a motion blur effect. However, instead of rendering images at a higher resolution, you render the animation for more frames. In this example, the animation was rendered four times as long, and every four consecutive frames are averaged into one output frame. The slide shows the 16 rendered frames and, on the right, the resulting motion-blurred frames. (A sketch of the averaging step follows the notes below.)

-rendering the animations longer instead of rendering images much larger

-example: rendered 4 times as long

-take 4 frames->average the 4 frames->resulting image
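A sketch of the averaging step (Python with NumPy):

import numpy as np

def motion_blur(frames):
    """Average a group of consecutively rendered frames (each H x W x 3)
    into one output frame; with 4 sub-frames per displayed frame this
    matches the 4x-longer rendering described above."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)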

(slide depth of field)

Tsoi: Another effect that makes the environment more realistic is depth of field. One method is post-processing, which involves two-pass rendering. In the first pass, the scene is rendered along with two important values per pixel: a depth value and a blurriness factor. In the second pass, a variable-sized filter blurs the image using the blurriness factor from the first pass. (A sketch of the second pass follows the notes below.)

-method: post-processing (two-pass rendering)

-1st pass: render with depth value and blurriness factor

-2nd pass: variable-sized filter blurs the image using the blurriness factor

-blurriness factor 0-1 (0 = sharp, 1 = blurry)
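A sketch of the second pass (Python with NumPy; a box filter is assumed here, though any variable-sized filter works):

import numpy as np

def depth_of_field(image, blurriness, max_radius=4):
    """Blur each pixel with a box filter whose radius grows with that
    pixel's blurriness factor (0 = sharp, 1 = fully blurred).
    `image` is H x W x 3; `blurriness` is H x W with values in 0..1."""
    h, w, _ = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(blurriness[y, x] * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out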

Tsoi: As with all elements that contribute to making real-time rendering more realistic, implementation of motion blur and depth of field greatly reduces rendering performance.

Types of Lighting

(slide Radiosity)

Radiosity is a type of global illumination. On the left, the scene is lit with direct illumination, which calculates only the light coming directly from light sources. Indirect illumination calculates the light reflected off other objects. Global illumination, shown on the right, is the combination of direct and indirect illumination. Radiosity assumes all surfaces are perfectly diffuse reflectors.

-direct illumination picture

3 types of lighting

1. spot lighting with shadows (create light shining on floor)

2. ambient lighting (so the room won't be totally dark)

3. Omnidirectional lighting w/o shadows (reduce flatness of ambient lighting)

-radiosity

-one source of light

-soft shadows

progressive radiosity algorithm

-takes point on a single surface

-compute form factors between this surface and all other surfaces

(form factor: fraction of energy leaving one surface and arriving at a second surface)

-update radiosity value of those surfaces

-each iteration repeats the process with a different point (a sketch follows below)
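A sketch of the loop (Python with NumPy; for simplicity it assumes all patches have equal area, so the form factors F[i, j] can be used directly):

import numpy as np

def progressive_radiosity(emission, reflectance, form_factors, iterations=100):
    """Progressive radiosity sketch: each iteration shoots the unshot
    radiosity of the brightest surface to all other surfaces through the
    form factors F[i, j] (fraction of energy leaving i that arrives at j)."""
    radiosity = np.asarray(emission, dtype=float).copy()
    unshot = radiosity.copy()
    for _ in range(iterations):
        i = int(np.argmax(unshot))            # surface with most unshot energy
        shot, unshot[i] = unshot[i], 0.0
        for j in range(len(radiosity)):
            if j == i:
                continue
            received = reflectance[j] * form_factors[i, j] * shot
            radiosity[j] += received
            unshot[j] += received
    return radiosity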

(slide HDR lighting)

Simon: A recent boom in graphical technology has led to the development of a lighting technology known as High-Dynamic Range Lighting. The use of HDR lighting, or HDR Rendering, pushes the level of realism of computer-generated scenery to its limits.

(slide HDR lighting)

Simon: In the earlier days of real-time lighting, when DirectX shipped the set of shader instructions known as Shader Model 1.0, lighting precision was limited to 8 bits, meaning brightness could only be represented by the integers 0-255. The calculations were also integer-based, so their results could not represent accurate levels of brightness. When DirectX 9.0 came out with Shader Model 2.0, lighting precision grew to a maximum of 24 bits, and Shader Model 3.0 allowed 32-bit lighting precision as well as floating-point calculations.

Simon: These advancements led to the development of HDR lighting. High-precision brightness calculations allow for realistic effects like glare and sun flares.
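A small illustration (Python with NumPy, invented brightness values) of why the added precision matters:

import numpy as np

# An 8-bit pipeline clamps everything brighter than 1.0 into the same value,
# while a floating-point pipeline keeps the difference, which is what lets
# HDR show detail in both bright and dark areas.
scene = np.array([0.5, 1.0, 4.0, 16.0])        # physical brightness values

ldr = np.round(np.clip(scene, 0.0, 1.0) * 255)  # 8-bit: 4.0 and 16.0 collapse
hdr = scene.astype(np.float32)                  # float: full range preserved

print(ldr)   # [128. 255. 255. 255.]
print(hdr)   # [ 0.5  1.   4.  16. ]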

Simon: Graphics processor company NVIDIA summarizes one of High Dynamic Range Lighting’s features in three points.

(slide NVIDIA Points)

Simon: The first point is that bright things can be really bright. The second is that dark things can be really dark. Finally, the third is that details can be seen in both brightness and darkness.

Polygons and Shading

Polygons

NURBS

Shading

Flat

Find normal vector at each face

Calculate color and light intensity & direction at each face

Shade each face

Gouraud

Find normal vector at each vertex of a face

Calculate color at each vertex of a face

Linearly interpolate color across a face

Phong

Interpolate vertex normals to get a normal at each pixel

Calculate color at each pixel from that normal

Blinn

Pixel and Vertex Shaders

Pixel Shaders

Manipulates a pixel to apply a graphical effect on an image

Vertex Shaders

Vertices are defined by colors, textures, and lighting

Vertex shaders don’t change the type of data, but change its values, so…

Vertex emerges with a different color, texture, or position

Bump Mapping

Bump Mapping – black & white bitmap or procedural texture to find a perturbed normal at each pixel

Displacement Mapping – actual surface points and normals are transformed

Realism Effects

Lighting

Reflection Maps

Anti-aliasing

Motion Blur

Depth of Field

Types of Lighting

Radiosity

HDR Lighting

