SPRITES - WPI




Extreme Graphical Simplification

A Major Qualifying Project

Submitted to the faculty of

Worcester Polytechnic Institute

in partial fulfillment of the requirements for the

Degree of Bachelor of Science

by

John Reynolds

Paul Fydenkeves

Matt Gage

Professor Emmanuel Agu

Advisor

Date: June 16, 2004

Abstract

Complex 3D models usually require more processing power than the average mobile device can provide. A popular technique in games and real-time rendering is to substitute polygonal meshes with image-based representations such as billboards, sprites, and imposters. Using Billboard Clouds, a recently proposed image-based 3D model simplification algorithm, a high-end server can simplify a complex 3D mesh into a representation that suits a mobile client’s processing capabilities. This MQP implements the Billboard Cloud algorithm in Java3D.

Table of Contents

1 Introduction

2 Literature

2.1 Image-Based Rendering

2.1.1 Sprites

2.1.2 Billboarding

2.1.3 Imposters

2.1.4 Lens Flare

2.1.5 Particle Systems

2.1.6 Depth Sprites

2.1.7 Hierarchical Image Caching

2.1.8 Full-Screen Billboarding

2.1.9 Skyboxes

2.1.10 Fixed-View Effects

2.2 Billboard Clouds

2.2.1 What is a Billboard Cloud?

2.2.2 Setting up a Billboard Cloud

2.2.3 Greedy Optimization

2.2.4 Discretization of Plane Space

2.2.5 View-Dependent Optimization

2.2.5.1 Validity

2.2.6 Implementation

2.2.6.1 Texture Usage

2.3 Java3D

2.3.1 Introduction

2.3.2 Architecture of Java3D

2.3.3 OpenGL vs. DirectX in Java3D

2.3.4 Java3D vs. Other Graphical APIs

2.4 Mobile Devices

2.4.1 Introduction

2.4.2 Mobile Phones

2.4.2.1 System Specifications

2.4.3 Personal Digital Assistants (PDAs)

2.4.3.1 System Specifications

2.4.3.2 Display

2.4.4 Mobile Devices and Graphics

3 Methodology

3.1 Helper Programs

3.1.1 Model Displayer

3.2 Algorithms

3.2.1 3-D Hough Transforms

3.2.2 Projection onto a plane

3.2.3 Validity Check

3.2.4 Best-Fit Projection Rectangle

3.2.5 Spherical Equidistant Separation

4 Implementation

4.1 Helper Programs

4.1.1 Model Displayer

4.1.1.1 Function: loadMesh( )

4.2 Algorithms

4.2.1 3-D Hough Transforms

4.2.1.1 Function: funcHoughTransform( ) – 3 Dimensional

4.2.1.2 Function: funcHoughTransform( ) – 2 Dimensional

4.2.2 Projection onto a plane

4.2.2.1 Function: orthoProject( )

4.2.3 Validity Check

4.2.3.1 Function: isValid( )

4.2.4 Best-Fit Projection Rectangle for Set of Points in Plane Space

4.2.4.1 Function: brGetRotations( )

4.2.4.2 Function: brRotationX/Z( )

4.2.4.3 Function: brGetXYCoords( )

4.2.4.4 Function: brAreaFromAngle( )

4.2.4.5 Function: brGetBestArea( )

4.2.5 Spherical Equidistant Separation

4.2.5.1 Function: makeSphere( )

5 Results

5.1 Screenshots

5.2 Running Time of Algorithm

5.3 Final State of Implementation

5.3.1 Shortcomings

5.3.1.1 Shooting Textures

5.3.1.2 Java/Java3D only allocates 82 Megabytes of Memory

5.3.1.3 Image File Compression

5.3.1.4 Choosing Billboards

5.3.1.5 Liberal use of ArrayList

5.3.1.6 Two Coordinate Systems

5.3.1.7 Better Model (.ply) Files

5.3.2 Achievements

5.3.2.1 Get Best-Fit Rectangle Algorithm

5.3.2.2 Everything Else

6 Conclusion

7 References

8 Appendix

8.1 Instructions on Running Code

8.1.1 Setting up the Environment

8.1.2 Program Descriptions

8.2 Code

Table of Figures

Figure 1: Sprites

Figure 2: Mobile Sprite

Figure 3: Animated Sprite

Figure 4: Schaufler's worst case angular discrepancy metric [7]

Figure 5: Lens Flare

Figure 6: Lens Flare off-center

Figure 7: Original 3D Model [6]

Figure 8: Optimal Set of Billboards [6]

Figure 9: Texture & Transparency Maps for Billboards [6]

Figure 10: Billboard Clouds Representation [6]

Figure 11: N-Gage Mobile Phone

Figure 12: PDA running PalmOS

Figure 13: Projection

Figure 14: Validity Diagram

Figure 15: Points on a plane

Figure 16: Finding Rectangle Width

Figure 17: Finding Rectangle Height

Figure 18: Best-Fit Rectangle

Figure 19: Random points on a sphere

Figure 20: Sum of all magnitudes and directions of every other point

Figure 21: New position of P

Figure 22: High-Level Diagram of Implementation

Figure 23: Detailed Calculation Diagram

Figure 24: Function: loadMesh() (1 of 3)

Figure 25: Function: loadMesh() (2 of 3)

Figure 26: Function: loadMesh() (3 of 3)

Figure 27: Function: funcHoughTransform() – 3D

Figure 28: Function: funcHoughTransform() - 2D

Figure 29: Function: orthoProject()

Figure 30: Function: isValid()

Figure 31: Function: brGetRotations()

Figure 32: Function: brRotationX/Z()

Figure 33: Function: brGetXYCoords()

Figure 34: Function: brAreaFromAngle()

Figure 35: Function: brGetBestArea() (1 of 5)

Figure 36: Function: brGetBestArea() (2 of 5)

Figure 37: Function: brGetBestArea() (3 of 5)

Figure 38: Function: brGetBestArea() (4 of 5)

Figure 39: Function: brGetBestArea() (5 of 5)

Figure 40: Function: makeSphere() (1 of 4)

Figure 41: Function: makeSphere() (2 of 4)

Figure 42: Function: makeSphere() (3 of 4)

Figure 43: Function: makeSphere() (4 of 4)

Figure 44: Original 3D Bunny

Figure 45: Original 3D Dragon

Figure 46: Original 3D Bunny with Optimal Billboard Set

Figure 47: Original 3D Dragon with Optimal Billboard Set

Figure 48: Original 3D Bunny with Affected Faces

Figure 49: Original 3D Dragon with Affected Faces

Figure 50: Billboard Representation Bunny with Low Error (with Lighting)

Figure 51: Billboard Representation Dragon with Low Error (with Lighting)

Figure 52: Billboard Representation Bunny with High Error (with Lighting)

Figure 53: Billboard Representation Dragon with High Error (with Lighting)

Figure 54: Billboard Representation Bunny with Low Error (without Lighting)

Figure 55: Billboard Representation Dragon with Low Error (without Lighting)

Figure 56: Billboard Representation Bunny with High Error (without Lighting)

Figure 57: Billboard Representation Dragon with High Error (without Lighting)

Figure 58: Running Time of Algorithm

Figure 59: 3 Programs to Run

Table of Equations

Equation 1: Theta Screen

Equation 2: 3D Hough Transform

Equation 3: Projection Formula

Equation 4: Constant

Equation 5: New position of P

Equation 6: Error Bound

Introduction

“There are some things in this world that will never change…some things do change.” ~Morpheus

Twenty years ago, when the video game industry was just born, gaming system users jumped at all the new games that were coming out. At that time, video games were restricted to consoles, systems dedicated to one purpose: playing games. Because the industry was brand new, the hardware in these consoles was very primitive. However, the 2-dimensional games available were a new concept, and a huge hit nonetheless.

Twenty years later, some things did not change, and some things did. Games are still plentiful, and people never stopped playing them. However, games did evolve, and with this evolution came the third dimension. Very few games created in the present day are rendered in two dimensions; every new game that comes to market is built from a huge library of polygons that make up its characters and objects. Of course, games are not the only medium built on 3-dimensional polygons. The movie industry, business, and other fields also benefit from graphics rendered in 3D space.

Moreover, in the past 20 years, the personal computer has come into existence. While capable of replacing the gaming console, the personal computer is not bound to games alone; it can be applied to any industry.

With such a powerful tool as the personal computer, the next logical step in the evolution of technology is to take the personal computer anywhere, making it mobile. Laptops, of course, have grown powerful enough that one can replace a desktop personal computer. However, other mobile devices that have appeared in the last few years are meant not to replace the personal computer but to complement it. Devices such as Personal Digital Assistants (PDAs) were created to enable someone to do small tasks, previously done on a personal computer, in an extremely mobile way. Pocket-sized PDAs offer extreme mobility, but at the price of raw computing power.

Since PDAs are relatively new to the market, they have something in common with the console gaming systems of 20 years ago—they can only really handle 2D renderings. So the question is: what if someone wants to view a complex 3D model on their mobile device? One answer is billboard clouds.

Literature

In order to better understand 3-dimensional graphics rendering, this section details the basic ideas behind it along with several related techniques. Alternative methods that can substitute for rendering a 3-dimensional mesh model are discussed in terms of how they help with the rendering of objects, as well as how they relate to the Billboard Clouds algorithm.

1 Image-Based Rendering

Image-Based Rendering (IBR) is a technique that displays objects using images rather than polygons. Images are a much more efficient way to render complex models because only the visible pixels in view need to be rendered, rather than all the vertices of a polygon. IBR techniques have another advantage: intricate objects such as clouds and fire are much easier to depict using images, whereas it is almost impossible to render these non-solid objects as polygons. [1]

1 Sprites

A sprite is simply a 2-dimensional object on a screen. It usually represents an object within the program for which it was designed. For instance, a sprite could be used to represent a spaceship in a computer game. Figure 1 shows a few examples of sprites used in games:


Figure 1: Sprites

A sprite can be stationary, mobile, or animated. A stationary sprite typically sits motionless on the screen. Figure 1-A and C above are sprites that are currently stationary on the screen. The seven sprites in Figure 1-B (the six players and the ball) are also stationary.

The sprite, which is just a picture, can also be moved around the screen, becoming a mobile sprite. Figure 2 below shows an example of a mobile sprite.


Figure 2: Mobile Sprite

In Figure 2-A, the sprite representing the person is on the left side of the screen. In Figure 2-B, the sprite representing the person is more to the right side of the screen. This is an example of a mobile sprite because the same image is represented in different parts of the screen.

A sprite can also be animated. This means that the sprite representing an object in the game can be switched with another sprite that takes over the exact screen position of the sprite it replaced.


Figure 3: Animated Sprite

Figure 3 shows sprites representing a walking character. Figure 3-A shows a sprite of the character stepping forward. Figure 3-B represents the character in mid-walking motion or even standing still. Figure 3-C shows the character stepping forward now, with the other foot (as apparent because the character’s arms are now swinging opposite from before). All these sprites are switched in and out to simulate that the character is walking.
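The frame-switching just described reduces to picking a frame index from elapsed time. A minimal sketch in Java; the class and frame names are illustrative, not code from this project:

```java
// Sketch of the frame-switching logic described above (names illustrative).
// A walk cycle such as the one in Figure 3 is a short sequence of frames
// repeated over time, while the sprite's screen position is updated separately.
class SpriteAnimation {
    private final String[] frames;      // sprite images, stood in for by names
    private final double frameDuration; // seconds each frame stays on screen

    SpriteAnimation(String[] frames, double frameDuration) {
        this.frames = frames;
        this.frameDuration = frameDuration;
    }

    /** Returns the frame to draw at the given elapsed time, looping forever. */
    String frameAt(double seconds) {
        int index = (int) (seconds / frameDuration) % frames.length;
        return frames[index];
    }
}
```

With frames {stepLeft, stand, stepRight} at 0.2 seconds each, the walk cycle repeats every 0.6 seconds, exactly the swapping-in-and-out behavior described above.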

Sprites are very efficient, take up very little memory, and are easy to manipulate. Sprites are used in almost all 2-dimensional graphical applications. However, the sprite has begun to be pushed aside in favor of polygonal meshes. A polygonal mesh is a 3-dimensional object that can be rendered once and represent all angles of the object, whereas a sprite, being only 2-dimensional, must represent each side of the object by switching in a new sprite with every angle change. [1]

Layering a scene using sprites can be an effective tool that saves resources. Each sprite layer is accompanied by a depth, so Z-buffering can be eliminated by rendering the layers in back-to-front order. [1]
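The depth-layer idea above can be sketched as a painter's-algorithm sort; class and field names are illustrative, not from any real sprite library:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of depth-sorted sprite layers (names illustrative). Because
// each layer carries a single depth, sorting layers far-to-near and drawing
// them in that order produces correct occlusion with no Z-buffer at all.
class SpriteLayers {
    static class Layer {
        final String name;
        final double depth; // distance from the viewer; larger = farther away
        Layer(String name, double depth) { this.name = name; this.depth = depth; }
    }

    /** Returns the layer names in the order they should be drawn (back to front). */
    static List<String> drawOrder(List<Layer> layers) {
        List<Layer> sorted = new ArrayList<>(layers);
        sorted.sort(Comparator.comparingDouble((Layer l) -> l.depth).reversed());
        List<String> order = new ArrayList<>();
        for (Layer l : sorted) order.add(l.name);
        return order;
    }
}
```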

QuickTime VR is a technique where a photo-realistic image is rendered on the inside of a cylinder with the camera placed at its center. When the viewer rotates the camera, the portion of the image within view is retrieved and displayed. Using this technique, the viewer has control over how to view the scene. [1]
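The cylinder lookup amounts to wrapping the camera's yaw angle into a horizontal texture coordinate. A small sketch, assuming a single panorama image wrapped once around the cylinder (names illustrative, not QuickTime VR's actual API):

```java
// Sketch of the cylindrical-panorama lookup behind the technique described
// above (names illustrative). The panorama is one wide image wrapped around
// a cylinder; rotating the camera by a yaw angle simply selects a different
// horizontal window of that image.
class CylindricalPanorama {
    /**
     * Maps a camera yaw angle (radians) to a horizontal texture coordinate
     * in [0, 1), where 0 marks the seam of the wrapped image.
     */
    static double yawToU(double yaw) {
        double turns = yaw / (2.0 * Math.PI);
        return turns - Math.floor(turns); // wrap into [0, 1)
    }
}
```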

The Lumigraph is similar to QuickTime VR except that, instead of rendering a scene around the viewer, it stores an array of images of an object from almost every possible angle, giving the viewer a full representation of the object. [1]

2 Billboarding

Billboarding is a technique where polygons are oriented based on the view direction. As the view changes, the polygon's orientation changes in one of a few different ways, depending upon the type of billboard. When combined with alpha texturing and animation, billboarding can be used for effects such as smoke, fire, fog, and explosions, among others. For each billboard, two vectors are needed: a surface normal and an up direction. Usually the normal vector is fixed, but in the case of axially-aligned billboards the up vector remains fixed while the normal vector is the one that moves. Using these two vectors, a rotation matrix can be created for the billboard. [1]

Screen-aligned billboards are the simplest types of billboards to create. For all billboards of this type, the surface normal is the negation of the view plane's normal. The up vector is the same as that of the camera itself. Billboards that use this kind of orientation will always have the same look to the camera. This is useful for things such as annotation text and particles (or other circular sprites) but is pretty much limited to that for realism purposes. [1]

World-oriented billboards are oriented with a global up vector (usually the normal to the "ground"). If the normal and up vectors are used to form a single rotation matrix that is shared by all of the billboards in a scene, billboards far from the view axis become distorted. When the billboards are nearby, this becomes a realism problem, so viewpoint-oriented billboards are used instead: the billboard's normal vector is set to the vector from the billboard's position to the viewpoint's position, giving every billboard in the scene its own rotation matrix. [1]
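The rotation-matrix construction for a viewpoint-oriented billboard can be sketched with plain vector math. This is a hypothetical helper, not code from this project or from Java3D:

```java
// Sketch of building the per-billboard rotation described above (names
// illustrative). The billboard normal points from the billboard toward the
// viewpoint; crossing the world up vector with it gives the billboard's
// right axis, and a second cross product re-derives an up axis that is
// exactly perpendicular to both. The three axes are the columns of the
// billboard's rotation matrix.
class BillboardBasis {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[]{v[0]/len, v[1]/len, v[2]/len};
    }

    static double[] cross(double[] a, double[] b) {
        return new double[]{
            a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]};
    }

    /**
     * Returns {right, up, normal}: an orthonormal basis for a billboard at
     * pos facing viewpoint, given a world up vector that must not be
     * parallel to the view direction.
     */
    static double[][] viewpointOriented(double[] pos, double[] viewpoint, double[] worldUp) {
        double[] n = normalize(new double[]{
            viewpoint[0]-pos[0], viewpoint[1]-pos[1], viewpoint[2]-pos[2]});
        double[] right = normalize(cross(worldUp, n));
        double[] up = cross(n, right); // already unit length
        return new double[][]{right, up, n};
    }
}
```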

In axial billboards, the up vector of an object is specified and unmodifiable, while the billboard rotates about that axis to face the viewer. The most commonly cited example of an axial billboard is a tree. While flying around the tree it looks normal (although the tree always faces the viewer, which might be a bit odd in reality), but flying above it and looking down reveals artifacts resembling a cardboard cutout. Although trees are the primary example of axial billboards, essentially any cylindrically shaped object can be represented by one, as long as the up vector runs the length of the cylindrical axis. [1]

3 Imposters

An imposter is basically a form of billboard. The imposter is created by rendering a complex object from the current viewpoint into an image texture, and then mapping the texture onto the billboard. The cost of creating the imposter is proportional to the number of pixels it covers on the screen, not to the complexity of the object it replaces. This makes imposters much faster than re-rendering the object each time the viewpoint changes or the object moves: an imposter should be faster to draw than the object it represents, it should resemble the object, and it should be reusable for several viewpoints. [1]

An imposter can be reused for a few frames or shared among a few instances of the object. Imposters are good for collections of small static objects and for rendering distant objects rapidly, which has a few key advantages. An imposter can serve as a low Level of Detail (LoD) model that does not lose shape or color information, and it can also provide an out-of-focus image, via a low-pass filter, to help with a depth-of-field effect. Distant objects are an easy case because the farther an object is from the viewer, the closer successive viewpoints are together, so the object appears to move slowly. Imposters are also useful when an object is close to the viewer but tends to show the same side to the viewer. [1]

One problem with imposters is the degree of error introduced in creating them. When creating the imposter, the viewer is set to view the center of the bounding box of the object, and the imposter polygon is chosen so it points directly at the viewpoint. The imposter size is the smallest rectangle that contains the projected bounding box of the object. The texture of the imposter is then mapped onto the polygon. If the angle βtrans created by the viewer moving sideways becomes too large, the imposter needs to be updated. Also, if the viewer moves too close to the imposter, it needs to be redrawn. In general, the imposter needs to be redrawn every time βtrans, βsize, or βtex is greater than βscr. This is shown in Figure 4 below:


Figure 4: Schaufler's worst case angular discrepancy metric [7]

where:

[pic]

Equation 1: Theta Screen

Another method is to determine the largest distance any vertex moves during the entire animation. This distance is then divided by the number of time steps in the animation. If this value multiplied by the number of frames the imposter has been used for is greater than a threshold set by the user, then the imposter needs to be regenerated. [1]
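That second regeneration test is simple to express in code. A sketch of the threshold check described in the paragraph above (names illustrative):

```java
// Sketch of the animation-based regeneration test described above (names
// illustrative). maxVertexTravel is the largest distance any vertex moves
// over the entire animation; dividing by the number of time steps gives a
// per-frame drift, which accumulates for as long as the imposter is reused.
class ImposterUpdate {
    static boolean needsRegeneration(double maxVertexTravel,
                                     int animationTimeSteps,
                                     int framesImposterUsed,
                                     double userThreshold) {
        double driftPerFrame = maxVertexTravel / animationTimeSteps;
        return driftPerFrame * framesImposterUsed > userThreshold;
    }
}
```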

Imposters are important for increasing the speed of creating a scene, replacing the drawing of the actual objects. They are good for distant objects and collections of dynamic objects. The ability to create and update these images faster than re-rendering the actual object is a great advantage in graphics. [1]

4 Lens Flare

When direct light hits the lens of a camera, the light is supposed to pass through the glass lens; however, this is not always the case. Light can reflect off the surface of the glass, and then may bounce around on the inside of the camera. This is called a lens flare. Lens flares are usually referred to as “mistakes” in the photography world. However, they are regularly used in the computer graphics community due to their aesthetically pleasing effect. [1]

The lens flare is composed of two elements: flare and bloom. The flare consists of a lenticular halo and a ciliary corona. The halo is made up of colored concentric rings, where the inside ring is violet and the outside ring is red. The corona is made up of bright rays emitted from the central point of the light source. The bloom is the brightness of the light source. A large bloom is usually used to simulate the sun, since a computer monitor cannot display light that intense. [1]


Figure 5: Lens Flare

The image on the left of Figure 5 shows a lens flare with a small bloom while the image on the right has a very large bloom.


Figure 6: Lens Flare off-center

The lens flare is usually implemented using a set of sprites affixed to a polygon that is oriented to face the camera, thus creating a billboard. Unlike the flares in Figure 5 where the light source is dead center of the camera, the lens flare is more realistic when it moves with the camera as in Figure 6. The billboards are arranged in a line drawn from the light source through the center of the screen. [1]
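Arranging the flare billboards along that line is straightforward linear interpolation. A sketch (the class name and the choice of t values are illustrative):

```java
// Sketch of placing flare billboards on the light-to-center line described
// above (names and spacings illustrative). Each flare element gets a
// parameter t: t = 0 sits on the light source, t = 1 on the screen center,
// and t > 1 continues past the center along the same line.
class LensFlareLayout {
    /** Returns {x, y} screen positions for the given parameters t along the line. */
    static double[][] flarePositions(double lightX, double lightY,
                                     double centerX, double centerY,
                                     double[] ts) {
        double[][] positions = new double[ts.length][2];
        for (int i = 0; i < ts.length; i++) {
            positions[i][0] = lightX + ts[i] * (centerX - lightX);
            positions[i][1] = lightY + ts[i] * (centerY - lightY);
        }
        return positions;
    }
}
```

As the camera turns, only the light's screen position changes; recomputing the positions each frame makes the flare elements slide along the line, giving the off-center behavior of Figure 6.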

5 Particle Systems

A particle system is a collection of individual particles or small objects where each holds any number of attributes that determine the behavior and movement of the particle. Particle systems are created to model things such as fireworks, trees, or even birds. [1]

Implementations of particle systems vary among applications. Billboards and points are the most common. A particle can most simply be represented as a point on the screen whose attributes, such as speed, direction, and color, are manipulated through an algorithm. Lines are another popular form of particle representation: in a loop, a line segment can be drawn from the initial location of the particle to its final location. Particles can also be represented as billboards, with each particle placed on a polygon that is then oriented to the screen. However, if the particle is a point, there is no need to manage the polygon's up vector, only its position in space. [1]
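A minimal point-particle update, sketching the attribute-driven behavior described above (a toy 2D version under gravity; names illustrative):

```java
// Minimal point-particle sketch of the attribute-driven update described
// above (names illustrative): each particle carries position, velocity,
// and remaining lifetime, and one update step integrates them under gravity.
class ParticleSystem {
    double[][] pos, vel; // per-particle {x, y}
    double[] life;       // seconds of life remaining

    ParticleSystem(double[][] pos, double[][] vel, double[] life) {
        this.pos = pos; this.vel = vel; this.life = life;
    }

    /** Advances every live particle by dt seconds under gravity g (downward y). */
    void update(double dt, double g) {
        for (int i = 0; i < pos.length; i++) {
            if (life[i] <= 0) continue;  // dead particles are skipped
            vel[i][1] -= g * dt;         // gravity changes velocity
            pos[i][0] += vel[i][0] * dt; // velocity changes position
            pos[i][1] += vel[i][1] * dt;
            life[i] -= dt;
        }
    }
}
```

A fireworks effect, for example, would spawn many particles with random initial velocities and respawn or recolor them as their lifetimes expire.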

6 Depth Sprites

Depth sprites (or nailboards) arise when the texture of an imposter is combined with a depth attribute, making them RGB images with a depth. The advantage of a depth sprite over an imposter is that it avoids the visibility errors that arise when objects intersect. [1]

The nailboard rendering algorithm is accomplished in software because current hardware architecture does not support rendering nailboards at run-time. [1]

7 Hierarchical Image Caching

Hierarchical image caching is an algorithm that arranges imposters in a hierarchy for better performance. Essentially, an entire scene is separated into a Binary Space Partitioning (BSP) tree, where the dividing planes form a series of main-axis-aligned boxes and each box is then made into an imposter. The algorithms that power the image caching must keep two things in mind: they must try to keep the number of primitives in each box approximately the same, and they must try to minimize the number of intersections between the dividing planes and the objects inside. If the number of boxes that the algorithm produces is too high, then the costs of creating the imposters will outweigh the benefits of having them. Also, if an object is split into multiple boxes, then cracks in the image may occur due to rounding errors. An easy way to avoid this effect is to make each BSP box slightly larger. [1]

To render a scene using hierarchical image caching, two steps are in order. First, all objects outside the viewable area (the view frustum) are culled, and any invalid imposters are re-rendered and propagated to the root of the hierarchy. Second, the imposters inside the view frustum are all rendered from back to front. Because this process requires such a large amount of pre-calculation for the BSP tree, it is infeasible to render dynamically changing objects using this method. One possible solution, however, is to use depth sprites to avoid visibility and depth problems. [1]

8 Full-Screen Billboarding

Full-screen billboarding is often used for effects such as night-vision goggles and flashes. Essentially, this technique involves setting a billboard facing the viewer at all times and taking up the entire view screen. For the two effects above, a green alpha-blended billboard gives the night-vision look, and a white billboard whose opacity rises and falls rapidly gives a flash effect. Also, to simulate an environment, billboards can be placed behind all the objects in a scene. For this to work correctly, though, the 'u' and 'v' coordinates of the environment texture need to rotate with respect to the camera, so that the illusion of animation is maintained. [1]

9 Skyboxes

Skyboxes are an interesting concept similar to that of environmental billboarding. Essentially a cube is used to enclose all objects in the scene and the cube is centered on the viewer in the scene. This is especially useful in cases such as star fields, and sky textures, because they will never change appearance no matter where a person moves in relation to the scene. Although this technique has many benefits, there are occasionally problems with seams appearing in the cubic map. One way of avoiding the problem of seams is to create the cube from six slightly larger squares that overlap each other. However, the problem with this technique is that the textures cannot easily be used as a cubic environment map, due to the extra pixels on the edges. [1]

10 Fixed-View Effects

When drawing a scene, the view may be in a fixed position, giving only one location and orientation of the environment. This is common in some types of games and training software. It can be a benefit when trying to increase the speed of rendering a scene, in that a few techniques can be applied to draw images in different ways instead of rendering each object for every frame. For instance, a static scene can be photographed, drawn, or computed in advance. In order not to lose any quality, depth mapping can be used: the depth at which an object currently sits is stored and compared against other objects in the scene to give the correct perception. The stored scene can also be made larger than the visible screen itself, allowing panning and zooming. Varying the shading of the pixels can help with the illusion of giving objects depth within the image. Simply by editing the local lighting, rendering can be sped up by up to 95 times compared with re-rendering the objects each frame. Finally, a concept called the golden thread, or adaptive refinement, enables the computer to produce a progressively better image of a static scene as time goes on. The improved images can be swapped in abruptly or blended into the scene over time. [1]

The ability to use pixel shading has helped tremendously to speed up the rendering of images. Pixel shaders allow a user to sample a texture and blur it, remap it to grayscale or heat signatures, perform edge detection, and many other operations. The technique works by mapping an image to a screen-aligned quadrilateral sized so that each texel renders to exactly one pixel. Using a pixel shader, the texture is sampled multiple times and the samples are combined into an average, resulting in a better-looking image than sampling only once. Some examples of effects built from combined pixel shaders are smoke, fire, and rippling water. [1]

Another form of rendering is volume rendering, in which data are represented by voxels, or volumetric pixels, each representing a regular volume of space. An example is an MRI image, which takes x by y by z voxels to create a three-dimensional image. This method is used in medicine and oil prospecting, and can also create photo-realistic images. Voxels can be rendered as a set of two-dimensional image slices that are sheared and warped to create the resulting image, or through a method called splatting, in which a surface or volume is represented by screen-aligned geometry or sprites rendered together to form a surface. Other methods are rendering volume slices as textured quadrilaterals and volumetric textures, which are volume descriptions represented by layers of two-dimensional, semitransparent textures. These are good for creating complex surfaces such as landscape details, organic tissues, and fuzzy or hairy objects. [1]

2 Billboard Clouds

As time passes, the human need for expansion across all forms of technology causes a constant demand for new and better applications. Each time a groundbreaking product or application reaches the market, demands for something even more groundbreaking emerge very shortly after. This is apparent across all forms of technology, and computer graphics is no different. Once the eye-popping graphics and special effects of a new game or movie wear off, the public demands even more realism from the next generation of entertainment platforms. [2]

One of the main factors that play an important role in the advancement of graphics is hardware. Hardware assistance greatly improves the rendering speed of graphics engines. Presently, graphics software performance—and the human imagination—exceed that of hardware performance. Therefore, in order to pacify the ever increasing human thirst for realism, software must find shortcuts in order to keep the requirements of the software on the same level as hardware technology. [2]

The aspirations for creating a realistic environment require rendering a number of virtual polygons that would surpass the capacity of today's hardware. Billboard clouds are a method of creating that realistic environment without the cost of rendering a huge number of polygons. [2]

1 What is a Billboard Cloud?

A billboard cloud is “a set of textured, partially transparent polygons, with independent size, orientation and texture resolution” [2]. Basically, a billboard cloud is a set of pictures that are oriented in such a way as to create the appearance of a 3-dimensional object. Since the object is made up of pictures, its visual representation can be complex or simple with a negligible change in rendering cost. [2]

Figure 7: Original 3D Model [6]

Figure 8: Optimal Set of Billboards [6]

Figure 9: Texture & Transparency Maps for Billboards [6]

Figure 10: Billboard Clouds Representation [6]

2 Setting up a Billboard Cloud

The process of setting up a billboard cloud is simply taking a textured 3-dimensional virtual model and transforming it into a billboard cloud. Two functions govern a billboard cloud: an error function and a cost function. [2]

As in most approaches to new graphics rendering techniques, there are two methods of simplification: budget-based and error-based. In budget-based simplification, a maximum cost is given that represents hardware demands for rendering—such as the number of billboards in an object and the number of textures used—and a billboard cloud is constructed that minimizes error while not exceeding the maximum cost. In error-based simplification, a maximum error is set, and a billboard cloud is constructed that minimizes rendering cost while not exceeding the error limit. [2]

3 Greedy Optimization

The Billboard Clouds algorithm takes as input a specified error metric value and automatically generates an optimal set of billboard planes. One option is a greedy algorithm. Because greedy algorithms focus on locally optimal values, they can miss choices that will cause problems down the line. In the case of the Billboard Clouds algorithm, this means it might pick a certain group of planes and leave out nearby faces because they do not have quite the same tangent as the billboard. Thus, a penalty system was created to account for near-tangency issues in the billboard selection algorithm. The algorithm uses penalty, contribution, and validity. The penalty is similar in formula to the contribution, but it is weighted more strongly: instead of summing over the group valid(P), the penalty formula sums the area of a group miss(P), defined as the faces that fall just outside the validity tolerance of the billboard plane in question. [2]

4 Discretization of Plane Space

Using something known as a Hough transform, a plane can be represented as a point in a 3D plane space. The coordinates are (θ, φ, ρ), which, if represented as a 3D graph, would correspond to the (x, y, z) dimensions respectively. In plane space, the planes passing through a given point of the model form a sheet following a function ρ = f(θ, φ). To discretize this plane space, we divide it into “bins” (the plane-space equivalent of a “cube” in 3D space). Each bin is given a separate value, depending upon how many faces have sheets that intersect the bin. [2]
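The sheet function ρ = f(θ, φ) can be sketched directly, using one common spherical parameterization of plane normals (the exact angle convention in the project's funcHoughTransform( ) may differ):

```java
// Sketch of the plane-space mapping described above, using one common
// spherical parameterization (the project's exact convention may differ).
// Angles (theta, phi) pick a unit normal direction, and rho is the signed
// distance from the origin to the plane through point p with that normal,
// so each point of the model traces the sheet rho = f(theta, phi).
class HoughPlaneSpace {
    /** Unit normal for plane-space angles theta (polar) and phi (azimuth). */
    static double[] normal(double theta, double phi) {
        return new double[]{
            Math.cos(phi) * Math.sin(theta),
            Math.sin(phi) * Math.sin(theta),
            Math.cos(theta)};
    }

    /** rho = f(theta, phi): signed origin distance of the plane through p. */
    static double rho(double[] p, double theta, double phi) {
        double[] n = normal(theta, phi);
        return n[0]*p[0] + n[1]*p[1] + n[2]*p[2];
    }
}
```

Discretizing plane space then amounts to stepping theta, phi, and rho in fixed increments and counting, per bin, how many faces' sheets pass through it.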

After giving each bin in plane space a discrete value, the model is optimized with a greedy algorithm. To do so, the bin with the highest density is found, along with a plane in that bin that can collapse the set of faces contained within. Because a bin's density does not necessarily mean that all of the faces inside intersect a single plane, the algorithm has to refine the bin until it reaches a state where they do. To help choose the densest region, the bin and all of its neighbors (imagine the 27 sub-cubes of a Rubik's-Cube-type setup) are subdivided into 8 pieces each (i.e., split halfway on each axis x, y, z). After a sub-bin has been chosen, the process is repeated until a bin is found with a full plane intersected inside, which then has its faces collapsed, and the algorithm restarts. [2]
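The refinement step can be sketched as a recursive descent. The sketch below simplifies the scheme to subdividing a single bin (ignoring the 26 neighbors the text mentions), and the `Density` callback stands in for counting the face sheets that cross a box; both are assumptions for illustration.

```java
// Sketch of bin refinement: repeatedly split the current bin in half on each
// axis (8 sub-bins) and descend into the densest child.
public class BinRefinement {

    public interface Density {
        int count(double[] lo, double[] hi); // face sheets crossing the box
    }

    // Refine [lo, hi] 'depth' times, always keeping the densest child.
    public static double[][] refine(double[] lo, double[] hi, int depth, Density d) {
        double[] curLo = lo.clone(), curHi = hi.clone();
        for (int step = 0; step < depth; step++) {
            double[] bestLo = null, bestHi = null;
            int best = -1;
            for (int mask = 0; mask < 8; mask++) {      // the 8 sub-bins
                double[] sLo = new double[3], sHi = new double[3];
                for (int a = 0; a < 3; a++) {
                    double mid = 0.5 * (curLo[a] + curHi[a]);
                    boolean high = ((mask >> a) & 1) != 0;
                    sLo[a] = high ? mid : curLo[a];
                    sHi[a] = high ? curHi[a] : mid;
                }
                int c = d.count(sLo, sHi);
                if (c > best) { best = c; bestLo = sLo; bestHi = sHi; }
            }
            curLo = bestLo; curHi = bestHi;
        }
        return new double[][] { curLo, curHi };
    }
}
```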

During the greedy collapse phase, each plane P is assigned a set of faces that have collapsed into it. To shoot a texture onto the plane, the minimum bounding rectangle is first found, and then the texture is shot by rendering the faces using orthographic projection. During this process, it is important to also keep the alpha and mip-mapping data consistent throughout the whole process. [2]

6 View-Dependent Optimization

The billboard cloud construction can be applied to a view-dependent case to minimize the error in the volumetric view-cell. This useful image-based acceleration uses a reference viewpoint in the view-cell to compute the textures and to define the validity. A billboard is built so the view of the cloud is only correct from the reference viewpoint. This yields a billboard cloud with better consistency and one that is less likely to crack. [2]

1 Validity

The validity is calculated by an algorithm that uses the reference viewpoint V, a vertex M, a point P on the line VM, and another viewpoint T. V is selected as the center of the view-cell, as this minimizes the reprojection error, and the vertex M lies some distance from the view-cell. When M is simplified along the line VM to a point P, the reprojection error for another viewpoint T in the view-cell can be defined as the angle under which the segment MP is seen from T. This error is at a maximum when the line MT is tangential to the view-cell. The algorithm defines the segment from P− to P+ along which M can be translated with an error less than εmax. To get the pixel distance in image space corresponding to this angle εmax, another algorithm is used; it takes the distance of the camera's near plane n, the width of the screen w (in world units), and the width of the viewport W (in pixels). The final portion of this is the contribution, which is defined as the projected area in a reference view from V. [2]
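The angle-to-pixels conversion named above can be sketched with the small-angle geometry implied by the three quantities n, w, and W. The exact formula in [2] may differ; this is one plausible reading, not the paper's definitive expression.

```java
// Sketch: converting the angular error bound to a pixel distance on screen.
public class ReprojectionError {

    // n: near-plane distance, w: screen width in world units,
    // W: viewport width in pixels, epsMax: angular error in radians.
    public static double pixelError(double n, double w, int W, double epsMax) {
        // A segment seen under angle epsMax spans n*tan(epsMax) world units on
        // the near plane, which the viewport maps at W/w pixels per world unit.
        return n * Math.tan(epsMax) * W / w;
    }

    // Inverse: the largest angle whose reprojection stays under 'pixels'.
    public static double maxAngleForPixels(double n, double w, int W, double pixels) {
        return Math.atan(pixels * w / (W * n));
    }
}
```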

7 Implementation

These validity domains are calculated and then use the greedy optimization to find a set of planes. Then the textures are shot from the reference viewpoint. This texture is then stored with the transformation matrix and will be used as a texture matrix to render the billboard cloud. [2]

The only inputs for the implementation of this algorithm are the error threshold and the maximum texture resolution in object space. It always produces a billboard cloud with all original faces collapsed onto billboards, and it is robust with regard to parameters such as the resolution of the discrete plane space. The method also benefits greatly from texture compactness and texture compression, which can usually reduce memory requirements by a factor of four to eight. The complexity of this method is basically O(kn), where n is the size of the input and k is the number of planes in the billboard cloud: the density computation is O(n), and each iteration costs O(n) as well. Using a random subset of faces could accelerate the density computation; since the density is only a guide for plane selection, this should not affect the behavior of the method. [2]

1 Texture Usage

These previous methods did not take texture size into account. One way to do so is to include a compactness term in the density definition and to restrict the primitives collapsed on a plane to maximal compact sets in the greedy construction algorithm. To do this, the bounding box of valid(Pi) is stored with each bin Bi of plane space in order to compute an irregularity term proportional to the squared perimeter divided by the area. The greedy algorithm is modified to restrict valid(P) to a maximal compact subset. The issues with compactness are mostly caused by different parts of the scene being collapsed on the same plane. This can be overcome by analyzing the validity set just mentioned into clusters and using only the densest cluster. "An aid in this task is to use the bucket-like partitioning algorithm by Cazals et al. [8], with an appropriate measure of compactness. The rest of the algorithm is left unchanged, but this can lead to one plane being selected many times to handle different clusters." [2]

3 Java3D

The Billboard Clouds algorithm will be implemented in Java3D. This API was chosen for Java's portability across many platforms. The Java Virtual Machine provides this portability, which will allow the algorithm to be used on different devices such as PDAs, cell phones, and other mobile devices.

1 Introduction

Java 3D is an API for developing three-dimensional graphics applications in the Java programming language. It was designed with high-performance 3D graphics in mind. This cross-platform API allows for quick development of complex 3D objects and permits fast, efficient implementations across a variety of platforms. 3D content can be loaded from VRML, OBJ, and other external files to aid in the creation of scenes. A rich feature set is included for building shapes, composing behavior, interacting with the user, and controlling rendering details. Even with the benefits of using this API, however, the future of Java3D is uncertain, and its development has been put on hold as of this writing.

2 Architecture of Java3D

Java3D implements its own thread scheduler, as the Java thread model does not allow enough control over thread scheduling. This scheduler allows total control over all of the threads used, so thread priorities are not needed. Messages are used in the system to translate scene graph changes into structures that optimize a particular function.

Geometry objects use two structures. One organizes the geometry spatially and is used for spatial queries in the scene; each virtual universe has such a geometry structure. The other is the render bin, one of which is associated with every view. The renderer threads use it to state-sort all geometries for rendering.

Behavior structures spatially organize behavior nodes. These structures are used by the behavior scheduler threads, which execute the behaviors that need to be completed.

Rendering is the most important part of this system. Since it is generally the most expensive part of graphics, the goal of the system, from a thread-scheduling point of view, is to be rendering at all times. These structures fit together through the thread scheduler, which is essentially an infinite loop, with each iteration running each thread it wishes to run once. Each loop creates one frame and runs one behavior scheduler pass. The scheduler waits for every thread in an iteration to complete before moving on to the next iteration.

A message is created each time the scene is changed. This message contains a time value, as well as a state value if one needs to be associated with the scene change. The message is then queued by the thread scheduler, and at each iteration the messages are processed and the necessary structures updated. Most of these messages require very little work to process, but occasionally a difficult one requires more time. Because Java3D can start processing messages and rendering frames right away, it can hit native graphics speeds in most cases. The difficult messages, involving a great deal of change or complex changes to the scene, slow the process and start to degrade rendering time.
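The scheduling pattern described above (run every registered task once per iteration, then wait for all of them before starting the next frame) can be sketched as follows. This is an illustration of the pattern only, not Java3D's actual scheduler code, and the class name is hypothetical.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Sketch of a frame scheduler: each iteration runs every task once on its
// own thread and waits for all of them before the next iteration.
public class FrameScheduler {
    private final List<Runnable> tasks;

    public FrameScheduler(List<Runnable> tasks) { this.tasks = tasks; }

    // Run 'frames' iterations; conceptually one rendered frame per loop.
    public void run(int frames) throws InterruptedException {
        for (int f = 0; f < frames; f++) {
            CountDownLatch done = new CountDownLatch(tasks.size());
            for (Runnable t : tasks) {
                new Thread(() -> { t.run(); done.countDown(); }).start();
            }
            done.await(); // wait for every thread before the next iteration
        }
    }
}
```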

3 OpenGL vs. DirectX in Java3D

Java3D can be used with either OpenGL or DirectX, though the DirectX version runs only on Windows. The OpenGL implementation of Java3D is more feature-rich than the DirectX version; the newer releases of Java3D use DirectX version 8.0. Since OpenGL is platform independent, a little more effort has gone into that implementation, and it produces roughly thirty to fifty percent better frame rates. Although OpenGL seems the better choice for this reason, the DirectX version appears a little more stable, showing fewer bugs when testing a major application.

4 Java3D vs. Other Graphical APIs

Java3D is similar to other graphics programming APIs, such as OpenGL, Direct3D, and PHIGS. Java3D is designed to use hardware acceleration when available, as much as possible given the underlying graphics architecture of the operating system. This API supplies Java with the ability to render three-dimensional objects while using OpenGL or DirectX to interface indirectly with the hardware. Java3D does not require direct hardware device-driver support, as it can rely on these other APIs for that.

4 Mobile Devices

1 Introduction

As with many computing devices, mobile devices have a central processor, memory, a display, and other components similar to those of desktop computers. However, these components are greatly reduced in both physical size and power. These devices are designed for extreme mobility and therefore must sacrifice size and power. Mobile phones are mainly used for making phone calls, but they also have many of the same basic features as a computer, including a display, and with a display comes an urge to show graphics on it. Personal Digital Assistants (PDAs) are not meant to replace computers, but rather to be a companion or "assistant" to a desktop computer. With the invention of these devices comes the desire to do more with them; this paper outlines a start to the journey of displaying complex 3D models on a mobile device.

2 Mobile Phones

1 System Specifications

Mobile phones are mostly used for making phone calls while on the move. Early mobile phones came with just enough processing power to control simple calling functions and a monochrome display. Today, many mobile phones have full-color displays and handle more complicated functions such as calendars, notepads, and even web browsing. Most recently, phones have appeared that can handle some 3D models, such as the N-Gage (no relation to one of the members of this project).

[pic]

Figure 11: N-Gage Mobile Phone

The N-Gage phone is actually a mobile device not exactly intended as a target of this project. Already able to render complex 3D models, it has enough power to accomplish what it needs to. Granted, the N-Gage cannot render an extremely large number of polygons, but it can handle far more than the average mobile phone or PDA. The Billboard Clouds algorithm is mostly intended for devices that have almost no ability to render a 3D model. For the N-Gage, however, the algorithm could still be applied if the number of polygons were very high.

3 Personal Digital Assistants (PDAs)

1 System Specifications

As in many computing systems, Personal Digital Assistants have a central processor that carries out tasks and processes. However, the processor of the average PDA is much slower than that of an average personal computer. The fastest PDAs have only a couple hundred MHz of processing power, compared to thousands of MHz on desktop computers. This makes sense, since a PDA is not meant to take the place of a desktop or even a laptop computer; extreme mobility and small size take precedence over functionality. A PDA is meant to be a companion, or "assistant," to a personal computer. PDAs can be divided into two groups, one with faster processors (and other components) than the other. The faster PDAs typically run a Microsoft Windows-based operating system, whereas the slower ones most likely run a Palm operating system specifically designed for PDAs. More detail about PDA operating systems follows.

[pic]

Figure 12: PDA running PalmOS

There are mainly two distinct operating systems for PDAs: Palm OS and PocketPC. Palm OS runs on PDAs created by Palm and requires less memory than PocketPC. It is designed to be smooth, efficient, fast, and easy to use, having been designed specifically for PDAs. PocketPC, formerly called WindowsCE, was created by Microsoft as an OS for PDAs and is made to act more like the OS of a desktop computer. It requires more memory and can be a little more cumbersome to use, but it supports more entertainment features such as color, sound, and graphics. It would therefore be easier to display graphics on a PDA running PocketPC, both because of the better OS support and because a PDA running PocketPC is likely faster.

Memory works a little differently in a PDA than in a desktop computer. A PDA has none of the complicated moving parts of a PC hard drive; instead, it stores all of its applications in two types of main memory. The default applications, such as the operating system and address book, are stored on a read-only, non-volatile chip. Any user-created or installed applications and data are stored in the PDA's volatile memory, whose contents can be lost if the batteries run out. Because all of the applications are stored in main memory, all of them are accessible all the time, with no need for the OS to fetch data from a secondary storage drive. Some slower PDAs (again, many running Palm OS) may have only around 8 MB for storing data, whereas the faster PDAs (running PocketPC) can have as much as 32 MB of space.

2 Display

PDAs have become successful because of their small size and extreme mobility. Therefore, the viewing display of the device cannot be so large as to defeat the purpose of fitting the PDA into a pocket. PDAs have a very small screen to work with, which makes it difficult to show extensive graphics. Screen resolutions barely exceed 320 pixels in either dimension, and colors can range from grayscale (most likely on the Palm devices) to thousands of colors (on the PocketPC devices).

4 Mobile Devices and Graphics

Displaying graphics on mobile devices is very difficult given the limited resources these devices have access to. As time passes, however, engineers will find new ways to improve PDA components, making it much easier to display graphics. Until then, it is extremely difficult to adequately represent complex graphics such as 3D models on a PDA without some sort of extreme simplification algorithm.

Methodology

This section outlines the steps needed to implement the Billboard Clouds algorithm in Java3D. Each of the following sections describes either a portion of the algorithm itself or some functionality that had to be added in order to complete the implementation.

1 Helper Programs

Certain things that are not part of the Billboard Clouds algorithm are nevertheless necessary in order to re-implement it. The algorithm itself only executes the equations needed to create the billboards for an object; it does not do much else on its own. Some supporting work is required to supply the algorithm with the necessary information, which is why several helper functions had to be added to the program.

1 Model Displayer

The first step in the re-implementation of the Billboard Clouds algorithm is to load an object into the program for conversion to a billboard cloud. The file format used for polygonal objects drawn with Java3D is the ply file. This file consists of a portion of header information, a list of vertices, and a list of faces. The list of vertices contains values for the x-coordinate, y-coordinate, z-coordinate, the confidence, and the intensity. The list of faces contains the number of vertices in each face and a reference to each of those vertices in the file's vertex list. Obtaining these values is the first and most basic step in implementing the Billboard Clouds algorithm.
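For reference, a minimal ply file matching the layout just described might look like the sketch below. The values are purely illustrative, not taken from any model used in this project.

```
ply
format ascii 1.0
comment illustrative example only
element vertex 4
property float x
property float y
property float z
property float confidence
property float intensity
element face 2
property list uchar int vertex_indices
end_header
0.0 0.0 0.0 0.9 0.5
1.0 0.0 0.0 0.9 0.5
0.0 1.0 0.0 0.9 0.5
0.0 0.0 1.0 0.9 0.5
3 0 1 2
3 0 2 3
```

Each face line starts with its vertex count (here 3) followed by indices into the vertex list, which is the structure the loader described below has to parse.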

This function needs to read in a ply file and extract this data for the algorithm to use. This step is not part of the algorithm itself; it is implemented as a pre-processing helper function that delivers the necessary data to the algorithm for execution.

2 Algorithms

This section describes the different portions of the Billboard Clouds algorithm that will be implemented. Each smaller algorithm that makes up the entire Billboard Clouds algorithm is detailed here, explaining how it works and how it will be implemented.

1 3-D Hough Transforms

3-D Hough Transforms are used in the Billboard Clouds algorithm to help with checking for validity. For the check to be useful, the values in the validity formula must be found, and this is where the 3-D Hough Transform comes into play. The validity formula needs a value r, which represents the distance from a point to a plane, and the Hough Transform finds this value. If a 3-D space is defined by the parameters r, θ, and φ, then any plane in x, y, z space corresponds to a point in the space defined by r, θ, and φ. As a result, the 3-D Hough Transform of a plane in x, y, z space is a point in r, θ, φ space. The values r, θ, and φ define a vector from the origin to the nearest point on the plane; this vector is also perpendicular to the plane.

Now, given a point, x, y, z, there are many different planes that intersect this point. All of these planes are expressed using the following equation:

r = xcos(θ) ∙ cos(φ) + ysin(θ) ∙ cos(φ) + zsin(φ)

Equation 2: 3D Hough Transform

All of these planes can be expressed as a curved surface in r, θ, φ space. These values can be calculated for a series of 3-D points, and the results plotted in a 3-D histogram to find the best-fitting plane for the given points.
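The transform itself is a one-line computation. The sketch below takes the unit normal to be (cos θ cos φ, sin θ cos φ, sin φ), so the third term uses sin φ; this reading makes the normal a unit vector, but it is an interpretation of the equation above rather than code from the project.

```java
// Sketch: the 3-D Hough transform distance r for a point (x, y, z) and
// angles theta, phi, with unit normal (cos t cos p, sin t cos p, sin p).
public class Hough3D {
    public static double r(double x, double y, double z, double theta, double phi) {
        return x * Math.cos(theta) * Math.cos(phi)
             + y * Math.sin(theta) * Math.cos(phi)
             + z * Math.sin(phi);
    }
}
```

For a fixed point, sweeping theta and phi traces out the curved surface in r, θ, φ space that the text describes.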

2 Projection onto a plane

Given a point and a plane, there is a simple formula that will project, or cast, an image of that point onto the plane, as shown in Figure 13 below.

[pic]

Figure 13: Projection

Projecting a point onto a plane is a result of the following equation:

P' = P − CN

Equation 3: Projection Formula

P is the original position of the point to be projected, N is the normal of the plane, and C is a constant corresponding to the distance P must travel along N in order to reach the plane. C is calculated with the following equation:

[pic]

Equation 4: Constant
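Equations 3 and 4 can be sketched together in a few lines. Since Equation 4 appears only as an image here, the constant below assumes a plane written as N·X = r with a unit normal N, giving C = P·N − r; this is an assumption consistent with Equation 3, not necessarily the document's exact formula.

```java
// Sketch of Equation 3, P' = P - C*N, assuming the plane N.X = r with
// unit normal N, so that C = P.N - r (an assumption; Equation 4 is an image).
public class PlaneProjection {
    // Project p onto the plane; n must be unit length.
    public static double[] project(double[] p, double[] n, double r) {
        double c = p[0] * n[0] + p[1] * n[1] + p[2] * n[2] - r;
        return new double[] { p[0] - c * n[0], p[1] - c * n[1], p[2] - c * n[2] };
    }
}
```

The projected point satisfies N·P' = r, i.e., it lies exactly on the plane.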

3 Validity Check

As validity has already been defined within the Billboard Clouds algorithm (see Section 2.2.5.1), the method used to implement it will now be discussed. As mentioned, calculating validity requires an error metric as well as the values for the formula. These values are obtained using the 3-D Hough transform: each value r from the transform is found, and then the validity calculation is conducted. As mentioned in the description of the 3-D Hough transform, the values of r are plotted in histogram form.

The search for the best plane for a given set of points is done by comparing the points' r values against each other to find the smallest difference between the maximum and minimum values of r. The values of r are calculated for every degree to find the best orientation for the plane, so each point has a list of r values computed from zero degrees all the way to ninety degrees. When this is completed for every set of points, the highest and lowest values of r yield a number to compare against the error metric: the lowest value of r is subtracted from the highest, no matter which component each comes from, and this difference is compared to the error metric to see if the plane is valid. Once a fit is found, the test for validity does not end; it continues to iterate to see whether there is a better fit, so all of the values calculated with the 3-D Hough transform are tested. If a better fit is found, that selection is saved until another plane fits better or the test ends. Once this is finished, the best-fit plane for creating the billboard for the given face is known. Figure 14 below illustrates this further:

[pic]

Figure 14: Validity Diagram
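The search just described can be sketched as a brute-force sweep over angles. The one-degree step and the 0° to 90° ranges come from the text; everything else (method names, the return convention) is illustrative.

```java
// Sketch of the validity search: sample angles per degree, compute the
// Hough distance r of every vertex, and keep the angle with the smallest
// spread between the largest and smallest r.
public class ValiditySearch {

    static double r(double[] p, double theta, double phi) {
        return p[0] * Math.cos(theta) * Math.cos(phi)
             + p[1] * Math.sin(theta) * Math.cos(phi)
             + p[2] * Math.sin(phi);
    }

    // Returns {bestTheta, bestPhi, bestSpread}; the plane is valid for a
    // given error metric when bestSpread does not exceed it.
    public static double[] bestPlane(double[][] pts) {
        double bestSpread = Double.MAX_VALUE, bestT = 0, bestP = 0;
        for (int td = 0; td < 90; td++) {
            for (int pd = 0; pd < 90; pd++) {
                double theta = Math.toRadians(td), phi = Math.toRadians(pd);
                double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
                for (double[] p : pts) {
                    double d = r(p, theta, phi);
                    if (d < min) min = d;
                    if (d > max) max = d;
                }
                if (max - min < bestSpread) {
                    bestSpread = max - min;
                    bestT = theta;
                    bestP = phi;
                }
            }
        }
        return new double[] { bestT, bestP, bestSpread };
    }
}
```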

4 Best-Fit Projection Rectangle

The purpose of this algorithm is to find the best-fit rectangle that encompasses all the points on the plane. Given a plane of points, the goal is to calculate the dimensions of the smallest-area rectangle that contains every point.

[pic]

Figure 15: Points on a plane

Figure 15 shows a sphere of points aligned at the origin. The next step is to increment an angle, θ, originating from the origin and measured from the x-axis. Along this direction, perpendicular lines are drawn through both the furthest point and the closest point from the origin, as shown in Figure 16; this determines the width of the rectangle. The same technique is then used for the angle θ + 90°, which in turn finds the limits of the height of the rectangle, as shown in Figure 17. Figure 18 then shows the rectangle found for this angle.

|[pic] |[pic] |

|Figure 16: Finding Rectangle Width |Figure 17: Finding Rectangle Height |

[pic]

Figure 18: Best-Fit Rectangle

Then, θ is incremented and the steps are repeated for the new angle. The angle θ is incremented from 0° to 90°, at which point the rectangle with the smallest possible area will have been determined.
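The rotating search above can be sketched directly. The sketch works on 2-D points already projected onto the plane and returns only the minimum area; the one-degree step matches the text, while the method names are illustrative.

```java
// Sketch of the best-fit rectangle search: for each angle, measure the
// extent of the points along the angle's direction and its perpendicular,
// and keep the angle giving the smallest rectangle area.
public class BestFitRectangle {

    // pts are 2-D points already projected onto the plane.
    public static double minArea(double[][] pts) {
        double best = Double.MAX_VALUE;
        for (int deg = 0; deg < 90; deg++) {
            double t = Math.toRadians(deg);
            double w = extent(pts, Math.cos(t), Math.sin(t));   // width at theta
            double h = extent(pts, -Math.sin(t), Math.cos(t));  // height at theta + 90
            best = Math.min(best, w * h);
        }
        return best;
    }

    // Distance between the closest and furthest point along direction (dx, dy).
    private static double extent(double[][] pts, double dx, double dy) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double[] p : pts) {
            double d = p[0] * dx + p[1] * dy;
            if (d < min) min = d;
            if (d > max) max = d;
        }
        return max - min;
    }
}
```

For a unit square, the sweep bottoms out at θ = 0° with an area of exactly 1, since any rotation of a square's bounding rectangle only grows it.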

5 Spherical Equidistant Separation

An algorithm was developed to create an equidistant separation among points on a spherical surface. The most common use of such an algorithm is in placing dimples on the surface of a golf ball. The task is for every point to be an equal distance from each of its closest neighbors. In contrast to the algorithm used in the original implementation by Décoret et al. [2], this procedure produces an equal sampling rate in all directions.

The algorithm is given a set of points scattered randomly around the surface of a sphere as in Figure 19:

[pic]

Figure 19: Random points on a sphere

The algorithm begins by choosing any point P on the surface of the sphere. For this point, the sum of the magnitude and direction of every other point (P1, P2, …, Pn) on the sphere is calculated to create a total magnitude and direction, M, as shown in Figure 20.

[pic]

Figure 20: Sum of all magnitudes and directions of every other point

P is then translated to a new position that is the result of the normalization of its current position, and the total magnitude and direction previously calculated. This is shown in Equation 5 and depicted graphically in Figure 21 below:

Pnew = normalize(Pold + M)

Equation 5: New position of P

[pic]

Figure 21: New position of P

The algorithm then chooses another point and loops through again until all points have been translated. It must be taken into account, however, that each point's translation affects every other point that used its old location in their own calculations. So, once each point has been through the loop once, all points must go through again, recalculating their positions due to the change in the magnitudes and directions of all the other points on the sphere.

The loop is traversed as many times as needed until the error bound is reached. The error bound is a value e such that, if the distance between a point's new position, Pnew, and its original position, Pold, does not exceed e, the point does not need to traverse the loop again:

|Pnew − Pold| > e   →   P traverses the loop again

|Pnew − Pold| ≤ e   →   P may exit the loop

Equation 6: Error Bound
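One pass of the relaxation above can be sketched as follows. The text leaves the exact construction of M open; here M is taken to be the sum of the vectors from every other point toward P, which pushes each point away from the rest before renormalizing onto the sphere. That interpretation, and the class name, are assumptions.

```java
// Sketch of one relaxation pass: Pnew = normalize(Pold + M) for every point,
// updated in place, with M interpreted as the summed repulsion from all
// other points (an assumption; the text does not pin M down exactly).
public class SphereRelaxation {

    public static void relaxOnce(double[][] pts) {
        for (int i = 0; i < pts.length; i++) {
            double[] m = new double[3];
            for (int j = 0; j < pts.length; j++) {
                if (j == i) continue;
                for (int a = 0; a < 3; a++) m[a] += pts[i][a] - pts[j][a];
            }
            double[] p = new double[3];
            double len = 0;
            for (int a = 0; a < 3; a++) {
                p[a] = pts[i][a] + m[a];
                len += p[a] * p[a];
            }
            len = Math.sqrt(len);
            for (int a = 0; a < 3; a++) pts[i][a] = p[a] / len; // back onto the sphere
        }
    }
}
```

Repeating this pass until every point moves less than the error bound e gives the outer loop described in Equation 6; for two points the fixed point is the antipodal arrangement.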

Implementation

The actual implementation of the code is explained in this section. Each part mentioned in the methodology (Section 3, starting on page 38) is discussed in detail as to how it was coded in Java3D.

[pic]

Figure 22: High-Level Diagram of Implementation

Figure 22 is a very general diagram of the structure of our code. The algorithm takes in a 3D mesh model and runs it through a rigorous set of calculations that determine the optimal set of billboards to represent the model. The final output compiles the billboards and images of the model together to create a simplified version of the original 3D mesh model.

[pic]

Figure 23: Detailed Calculation Diagram

Figure 23 above breaks down into steps how the calculation of the billboards actually works. The algorithm tests an equal distribution of candidate directions created by the equidistant spherical separation function, which ensures an even sampling of the directions to be tested. Then the best angle for each billboard is calculated, and the valid faces for each billboard are found using the validity test and the contribution test; this ensures that the best selection of faces is chosen for each billboard in the final product. The faces chosen for projection are used to shape the billboard and create the best fit for the set of faces on it. The selected faces are then put into image files by taking snapshots of only those faces. Finally, the billboard and the image files are combined: the image files containing the faces are projected onto the billboard, and this is the final product. This represents one step of the algorithm, which is looped for each billboard in order to create the entire three-dimensional simplified object.

1 Helper Programs

The helper function used to display the model was an original creation not included in the Billboard Clouds algorithm. Its implementation is discussed in the following section.

1 Model Displayer

The implementation of the Billboard Clouds algorithm began with a helper function that gives the algorithm the data necessary to execute all of its calculations. The most important of this information is the data loaded from the polygonal object (ply) file.

1 Function: loadMesh( )

 1   public Group loadMesh(String loc, Color3f clr, float scale) {
     ...
 3       while((str=fl.readLine()) != null) {
 4           switch(state) {
 5               case 0: if(str.indexOf("ply") != 0) {
 6                           state=-1;
 7                       } else {
 8                           state++;
 9                       }
10                       break;
11               case 1:
12                   if( str.indexOf("end_header") == 0) {
13                       state++;
14                       fl.mark(500);
15                   }
16                   if( str.indexOf("element") == 0) {
17                       str2 = str.substring(str.lastIndexOf(
18                           ' ')+1);
                         if(str.indexOf("vertex") >= 0) {
19                           num_ver = Integer.parseInt(str2);
20                       }
21                       if(str.indexOf("face") >= 0) {
22                           num_face = Integer.parseInt(str2);
23                           theFaces =
24                               new TriangleArray(num_face*3,
                                     TriangleArray.COORDINATES |
                                     TriangleArray.NORMALS);
                         }
25                   }
26                   break;
27   ...

Parameters
    String loc : A string giving the path and filename of the .ply file.
    Color3f clr : The color the model will be.
    float scale : The rate of resizing the points in the model; this is used to reduce error when working with very small numbers.

Returns
    TransformGroup tg : A group holding the final model object, to which various transforms can be applied.

Figure 24: Function: loadMesh() (1 of 3)

The basic idea behind the loadMesh function, shown in Figure 24 through Figure 26, is to read in the ply file and extract the needed data. The process consists of parsing the data in the file and then storing the object's list of vertices and list of faces. This also entails obtaining the important header information from the file, such as the number of vertices and faces, as well as checking that it is the correct type of file.

This first block of code, shown in Figure 24, conducts the initial steps of the helper function. It checks whether the file being read in is in fact a .ply file, as can be seen in line 6. The file is read in line by line, and a series of if statements and a case statement examine the line being read. When the type of file is checked, the case statement breaks if it is the wrong type of file (lines 6-7); otherwise the program proceeds to the next step. At this point, the numbers of vertices and faces are extracted from the header information. Lines 19-21 check for the vertex-count keywords and store this number in the variable num_ver. Lines 22-25 search for the face-count keywords, store the count in num_face, and create a new triangle array with three times this number of entries, as each face needs three vertices.

     ...
28               case 2:
29                   theVers = new Point3f[num_ver];
30
31                   fl.reset();
32                   for(i=0; i< ...

...
        if(max >= ct2) break;
        pts = model.getVers(arr2[max].fnum);
        if(isValid(c.theta, c.phi, arr2[j].height, pts, ERROR)) {
            val += arr2[max].contrib;
        } else {
            break;
        }
        max++;
        if(maxp < max) maxp=max;
    }
    //increment the Maximum Penalty sliding window
    while(true) {
        if(maxp >= ct2) break;
        pts = model.getVers(arr2[maxp].fnum);
        if(!isPenalty(c.theta, c.phi, arr2[j].height, pts, ERROR)) break;
        maxp++;
    }
    //increment the Minimum Penalty sliding window
    while(true) {
        if(minp >= ct2) break;
        pts = model.getVers(arr2[minp].fnum);
        if(isPenalty(c.theta, c.phi, arr2[j].height, pts, ERROR)) break;
        minp++;
        if(minp >= min) {
            minp = min;
            break;
        }
    }

    finfo = arr2[j];
    finfo.min = min;
    finfo.max = max;
    finfo.tcontrib = val;

    //If there's a new max, set all the appropriate information
    if(finfo.tcontrib > maxv) {
        System.out.println("New Max: "+finfo.tcontrib+", num="+(maxp-minp)+
            ", ["+Functions.radToDeg(c.theta)+", "+Functions.radToDeg(c.phi)+
            ", "+finfo.height+"]");
        maxv = finfo.tcontrib;

        //now that we've got the indices for the sliding windows, just
        //iterate over all of them to find the contributing faces.
        Structs.FaceContribList.FaceInfo contriblist[] =
            new Structs.FaceContribList.FaceInfo[max-min];
        Structs.FaceContribList.FaceInfo plist[] =
            new Structs.FaceContribList.FaceInfo[(maxp-max)+(min-minp)];
        for(k=min; k< ...

...

    if( lo >= hi ) return;
    else if( lo == hi - 1 ) {
        if (a[lo].height > a[hi].height) { T = a[lo]; a[lo] = a[hi]; a[hi] = T; }
        return;
    }

    pivot = a[(lo + hi) / 2];
    a[(lo + hi) / 2] = a[hi];
    a[hi] = pivot;

    while( lo < hi ) {
        while (a[lo].height ...