
AN UNEQUAL ERROR PROTECTION METHOD FOR PROGRESSIVELY COMPRESSED 3-D MESHES

Ghassan Al-Regib, Yucel Altunbasak and Jarek Rossignac

Center for Signal and Image Processing
Georgia Institute of Technology
Atlanta, GA 30332-0250
E-mail: {gregib, yucel}@ece.gatech.edu

Graphics, Visualization and Usability Center
Georgia Institute of Technology
Atlanta, GA 30332-0280
E-mail: jarek@cc.gatech.edu

ABSTRACT

In this paper, we present a packet-loss resilient 3-D graphics transmission system that is scalable with respect to both channel bandwidth and channel error characteristics. The algorithm trades off source coding efficiency for increased bit-stream error resilience to optimize the decoded mesh quality on the client side. It uses the Compressed Progressive Mesh (CPM) algorithm to generate a hierarchical bit-stream representing different levels of detail (LODs). We assign optimal forward error correction (FEC) code rates to protect different parts of the bit-stream differently. These optimal FEC code rates are determined theoretically via a distortion function that accounts for the channel packet loss rate, the nature of the encoded 3-D mesh, and the error-protection bit budget. We present experimental results showing that, with our optimal unequal error protection (UEP) approach, the decoded mesh quality degrades more gracefully as the packet loss rate increases, compared to either no error protection (NEP) or equal error protection (EEP).

1. INTRODUCTION

An increasing number of Internet applications utilize highly detailed 3-D meshes, giving rise to a large amount of data to be stored, transmitted, and rendered within a limited time frame. To alleviate these limitations, several single-layer compression techniques have been developed [1, 2]. However, the resulting compressed mesh still requires a significant amount of time to be completely downloaded before it can be displayed on the client's screen. To reduce this time, progressive compression techniques have been designed so that a coarse model is downloaded first and then refinement information is transmitted later [3].

Even though compression techniques reduce the number of bits transmitted, they do not account for major factors that affect the decoded mesh quality. Among these factors is packet loss. In a typical packet-switched network (e.g., the Internet), packets can be lost because of problems such as congestion and buffer overflow. Approaches to recover from these losses are either closed-loop, such as TCP, where lost packets are retransmitted, or open-loop, where pre- and post-processing of the communicated data are employed. When a 3-D graphics application is time-sensitive, a closed-loop approach may not be possible. In this paper, we only consider the open-loop approach.

One pre-processing solution is to employ forward error correction (FEC) codes to protect the bit-stream against packet losses. These methods fall into two classes: equal error protection (EEP) and unequal error protection (UEP). EEP methods apply the same FEC code to all parts of the bit-stream regardless of the contribution of each part to the decoded mesh. EEP is suitable when the channel has a low packet loss rate. However, at higher packet loss rates, important parts of the bit-stream might be lost, which results in a considerable degradation in the decoded mesh quality. In this case, UEP is more suitable since important parts of the bit-stream receive a higher level of error protection than other parts.

In this paper, we propose a source-and-channel-dependent error protection method that applies optimal FEC code rates based on a statistical distortion measure for unequal error protection (UEP)1. We consider 3-D meshes that are encoded progressively into different layers, or batches of refinements. Then, we design the channel codes to guarantee minimum distortion for a certain channel behavior.

The forward error correction codes used in this paper are the Reed-Solomon (RS) codes. These block codes are perfectly suited for error protection against bursty packet losses because they are maximum distance separable codes, i.e., there are no other codes that can reconstruct erased symbols from a smaller number of code symbols [4].

2. BACKGROUND

The recently proposed Compressed Progressive Mesh (CPM) [3] is one of the most effective algorithms for progressive compression of 3-D meshes. Due to its efficacy, this method will be used in this paper, although the proposed contributions are also applicable to other progressive compression methods with minor modifications. This section summarizes the CPM algorithm to provide insight about the encoded bit-stream content. The packetization method is also described briefly at the end of this section.

2.1. CPM Encoded Bit-Stream Format

CPM depends on two operations: edge-collapse and vertex-split. These two operations are illustrated in Figure 1. The edge-collapse and the vertex-split operations are applied at the encoder and the decoder, respectively.

The encoding process is iterative. At the beginning of every iteration, a subset of edges is chosen to be collapsed. These edges have to satisfy certain conditions so that they can be collapsed within the current batch [3].

Such restrictions make the edges being collapsed independent of each other, and hence, the decoding process (vertex-split operation) for a given vertex is independent of the others. On the other hand, these restrictions limit the reduction rate between consecutive LODs. Simulations show a reduction of about … in the number of vertices between two consecutive LODs for a typical mesh.

1Optimality should be understood within the context of the particular distortion measure and the FEC codes used in our approach.

Fig. 1. Edge-collapse and vertex-split operations.

Before generating the bit-stream for the collapse operations of a certain LOD, the vertices are first sorted. Then, the bit-stream is appended accordingly. Both the encoder and the decoder should have the same ordering of the vertices at the beginning of each iteration. Each edge-collapse operation is represented by two classes of information, namely connectivity and geometry. Connectivity information specifies whether a vertex is to be split or not, as well as the corresponding edges to be split, while geometry information specifies the coordinates of the newly added vertices.

CPM produces a sequence of batches in addition to the base-mesh. Application of each batch yields a different level of detail (LOD) that further approximates the original 3-D mesh. The base-mesh can be compressed using any single-level mesh compression technique that exists in the literature. In this paper, the base-mesh is compressed using the TG-algorithm [2]. On average, this algorithm has a compression rate of about … bits per vertex for both geometry and connectivity information [2].

2.2. Packetization Scheme

In our work, we adapt a packetization method known as block

of packets (BOP) [5]. In this method, the data is placed in hori-

zontal packets and then FEC is applied across BOP packets, ver-

tically. Such a method is most appropriate for packet networks

where burst errors are common [5]. The decoder combines all

correctly-received packets of a certain BOP and counts the num-

ber of lost packets. Let ( ) be the RS code applied to a given

BOP. Then, if the number of lost packets is not more than ( ),

then the decoder will be able to recover all lost packets in this BOP.

Otherwise, the decoder considers these packets as lost and irrecov-

erable. If a certain part of the bit-stream is not decoded, then it and

all parts received afterward are considered lost. In our UEP imple-

mentation, the base-mesh bit-stream is packetized into one BOP

while every LOD's batch is packetized into one BOP. Hence, for

-level encoded bit-stream, there are

BOPs. Each BOP is

"

protected with an optimal FEC code (RS-code) that is derived to

maximize the decoded mesh quality at the receiver as detailed in

Section 3.
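For concreteness, the following minimal Python sketch (ours, not part of the original system) models the BOP recovery rule described above. It performs no actual Reed-Solomon arithmetic and assumes, for simplicity, that packets are lost independently; it only encodes the condition that an (N, k) RS code applied vertically across a BOP can recover up to N − k lost packets. All names and the example code dimensions are illustrative.

```python
# Minimal sketch of the block-of-packets (BOP) recovery rule (illustrative only).
# A BOP holds N packets; an (N, k) RS code applied column-wise across the packets
# can reconstruct the BOP as long as no more than N - k packets are lost.

import random
from dataclasses import dataclass

@dataclass
class Bop:
    n_packets: int   # total packets in the BOP (data + parity), the RS block length N
    k_data: int      # data packets, the RS dimension k

    def is_recoverable(self, lost: int) -> bool:
        """An (N, k) RS code corrects up to N - k erased packets per BOP."""
        return lost <= self.n_packets - self.k_data

def decodable_prefix(bops, loss_rate, rng=random):
    """Number of consecutive layers (base mesh first) that can be decoded when each
    packet is lost independently with probability `loss_rate`. Decoding stops at the
    first BOP whose losses exceed its RS erasure capability."""
    decoded = 0
    for bop in bops:
        lost = sum(rng.random() < loss_rate for _ in range(bop.n_packets))
        if not bop.is_recoverable(lost):
            break
        decoded += 1
    return decoded

# Example: base mesh + 3 batches, one BOP of 63 packets each, with stronger
# protection (smaller k) on the base mesh; values loosely echo Table 1.
layers = [Bop(63, 45), Bop(63, 48), Bop(63, 50), Bop(63, 57)]
print(decodable_prefix(layers, loss_rate=0.10))
```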

3. PROPOSED METHOD

To develop a UEP method that is scalable with respect to variations in both channel bandwidth and channel error characteristics, we first develop a distortion measure that quantifies the expected mesh quality at the decoder. Its arguments consist of (i) a 3-D mesh, (ii) the number of bits available for forward error correction, and (iii) the channel characteristics. Based on this distortion measure, we define a mapping that determines the number of error-protection bits to be assigned to each of the $n+1$ layers (i.e., the base-mesh and the $n$ batches). To this effect, we compute a rate-distortion (RD) curve to take the source characteristics into account. This curve assigns each batch two numbers: (1) the number of bits needed to encode a particular batch and (2) an estimate of the distortion that would result if it were lost. Next, we integrate the channel packet loss models and the error correction capabilities of FEC codes into the batch-by-batch rate-distortion analysis to come up with a statistical distortion measure at the decoder. Then, we perform an optimization process that estimates the number of error correction bits added to each layer. The objective of the optimization process is, of course, to maximize the mesh quality in a statistical sense.

3.1. Statistical Distortion Measure

Generally speaking, mesh distortion estimation depends on the de-

coding strategy. As discussed in Section 2.1, each batch contains

both connectivity and geometry information. The former specifies

the vertices to be split in the coarser LOD while the latter provides

the coordinates of the newly added vertices. Therefore, although

losing part of the geometry information (but not the connectivity

information) affects the decoded mesh quality, the decoding pro-

cess can still continue since the decoder can keep track of vertices

to be split. In contrast, if part of the connectivity information is

lost, the decoder will be working on the incorrect set of vertices

and the decoded mesh features will be different from the original.

Hence, the decoding process terminates whenever lost packets can-

not be recovered.
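To illustrate this stop rule, the short sketch below (our own illustration, not the authors' code) estimates, by simulation, the probability that decoding terminates at each layer, given a hypothetical per-layer probability p[i] of an irrecoverable packet loss in that layer's BOP; the following paragraphs derive these probabilities analytically. The empirical frequencies approach the product form used in Equation 1.

```python
# Illustrative sketch: empirical distribution of the layer at which progressive
# decoding stops. p[i] is the probability that layer i's BOP suffers an
# irrecoverable packet loss; layer 0 is the base mesh.

import random

def stop_layer(p, rng=random):
    """Index of the first layer that cannot be decoded, or len(p) if all succeed."""
    for i, p_i in enumerate(p):
        if rng.random() < p_i:
            return i
    return len(p)

def stop_distribution(p, trials=100_000):
    counts = [0] * (len(p) + 1)
    for _ in range(trials):
        counts[stop_layer(p)] += 1
    return [c / trials for c in counts]

# Example with four layers and hypothetical block error rates.
print(stop_distribution([0.02, 0.05, 0.08, 0.10]))
```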

Following this standard decoder operation, the expected distortion at the received mesh is the sum of the products $P_i D_i$, where $i$ is the LOD index, $D_i$ is the distortion that results when decoding stops at $\mathrm{LOD}_i$, and $P_i$ is the probability of having an irrecoverable packet loss at $\mathrm{LOD}_i$. We estimate the distortion $D_i$ by using a conservative upper bound on the maximum distortion that results from the edge collapses in a batch, which might be obtained using the maximum of the distances between each vertex $v$ of the simplified mesh and the planes that support the original triangles that were incident upon all the vertices that collapsed to $v$ [3]. $P_i$, the probability of discontinuing the decoding operation at $\mathrm{LOD}_i$, is a conditional probability: it is equal to the product of the probabilities of correctly decoding all data before $\mathrm{LOD}_i$ and the probability of not being able to decode the $i$-th batch. In equation form, the expected mesh distortion at the decoder is written as follows:

$$ E\{D\} \;=\; \sum_{i=0}^{n} D_i \, P_i \;=\; \sum_{i=0}^{n} D_i \, P_B(i) \prod_{j=0}^{i-1} \bigl(1 - P_B(j)\bigr), \qquad (1) $$

where $n+1$ is the number of layers including the base-mesh, $P_B(i)$ is the probability of having irrecoverable packets in the $i$-th layer, and $D_i$ is the error between the transmitted mesh and the mesh obtained when decoding stops at the $i$-th layer. In Equation 1, $P_B(i)$ is known as the block error rate (BER) and is defined as the probability of losing more than $N - k_i$ packets in the $i$-th layer, since the $(N, k_i)$ RS code applied at this layer can recover at most $N - k_i$ packet losses. This quantity can be calculated in terms of the block error density function as:

$$ P_B(i) \;=\; \sum_{m = N - k_i + 1}^{N} f(m, N), \qquad (2) $$

where $f(m, N)$ is the block error density function, i.e., the probability of losing $m$ symbols within a block of $N$ symbols.


Replacing the block error rate (BER) in Equation 1 with the expression given in Equation 2 results in the following distortion function:

$$ E\{D\} \;=\; \sum_{i=0}^{n} D_i \left[ \sum_{m = N - k_i + 1}^{N} f(m, N) \right] \prod_{j=0}^{i-1} \left[ 1 - \sum_{m = N - k_j + 1}^{N} f(m, N) \right]. \qquad (3) $$

So far, the only quantity in Equation 3 that has not yet been calculated is the block error density function, $f(m, N)$, which depends on the channel model. In this paper, we used the Gilbert-Elliott (G-E) model, which has been thoroughly investigated in the literature [5]. The reader is referred to [5] for the exact expressions of $f(m, N)$.

After formulating the expected distortion introduced at the decoded mesh, we need to optimize the function given in Equation 3 to maximize the decoded mesh quality. The setup of the optimization problem and its solution are presented in Section 3.2.
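The sketch below (our illustration, not the authors' code) evaluates the distortion function of Equation 3. For simplicity it assumes independent packet losses, so that the block error density $f(m, N)$ is binomial; in the paper this density instead comes from the G-E model expressions in [5]. The distortion values D and code dimensions k in the example are hypothetical.

```python
# Sketch of Equation 3 under an i.i.d. packet loss assumption (binomial f(m, N));
# the paper uses the G-E model expressions from [5] instead.

from math import comb

def block_error_density(m, N, p):
    """Probability of losing exactly m packets out of N (i.i.d. loss with rate p)."""
    return comb(N, m) * p**m * (1 - p)**(N - m)

def block_error_rate(N, k, p):
    """P_B: probability of losing more than N - k packets, i.e. an irrecoverable BOP."""
    return sum(block_error_density(m, N, p) for m in range(N - k + 1, N + 1))

def expected_distortion(D, k, N, p):
    """Equation 3: sum over layers of D[i] * P_B(i) * prod_{j<i} (1 - P_B(j)).
    D[i] is the distortion incurred if decoding stops at layer i."""
    total, survive = 0.0, 1.0
    for D_i, k_i in zip(D, k):
        pb = block_error_rate(N, k_i, p)
        total += D_i * pb * survive
        survive *= 1.0 - pb
    return total

# Hypothetical example: base mesh + 3 batches, N = 63 packets per BOP, 10% loss.
D = [10.0, 4.0, 2.0, 1.0]   # distortion if decoding stops at layer i
k = [45, 48, 50, 57]        # RS dimensions per layer
print(expected_distortion(D, k, N=63, p=0.10))
```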

3.2. Optimal FEC Code Rates

Equation 3 estimates the expected distortion introduced at the decoded mesh in a statistical sense. Now, the objective is to minimize this distortion with respect to the set of $k_i$'s in Equation 3. Intuitively, the base-mesh is usually regarded as the most important layer in the encoded bit-stream, followed by the coarsest LOD, and so on, up to the finest LOD. Therefore, we expect the optimization process to allocate more error-protection bits to the base-mesh and the first few coarse layers.

The $n+1$ quantities $k_i$ in Equation 3 must satisfy two main conditions. First, the error-protection bit budget is upper-bounded by $R$, the number of available error-protection bits. The second and more obvious constraint is that $k_i$ cannot be greater than $N$. Combining Equation 3 with the above two conditions results in a constrained optimization problem given as follows:

$$ \min_{\{k_i\}} \; \sum_{i=0}^{n} D_i \left[ \sum_{m = N - k_i + 1}^{N} f(m, N) \right] \prod_{j=0}^{i-1} \left[ 1 - \sum_{m = N - k_j + 1}^{N} f(m, N) \right] \qquad (4) $$

Subject to:

$$ \sum_{i=0}^{n} (N - k_i)\, s \;\leq\; R, \qquad 0 \leq k_i \leq N, \quad i = 0, 1, \ldots, n, \qquad (5) $$

where $s$ is the RS symbol size (i.e., the RS code is defined over $GF(2^s)$).

The first constraint in Equation 5 forces the solution to be scalable with respect to the error-protection bit budget, while the second constraint in Equation 5 is a natural constraint of any FEC block code. Similarly, incorporating the block error density function in Equation 4 forces the solution to be scalable with respect to the channel error characteristics. The above non-linear constrained optimization problem can be solved numerically using the search algorithms in [6].
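As a simple illustration of such a search (the paper relies on the numerical methods of [6]; the exhaustive enumeration below is our own simplification and is only practical for a small number of layers), the following sketch expresses the budget as a total number of parity packets rather than bits, which is an assumption on our part. The distortion values are hypothetical.

```python
# Illustrative brute-force solution of the constrained problem in Equations 4 and 5.
# Assumes i.i.d. packet loss (binomial block error density) and a budget counted in
# parity packets; both are simplifications relative to the paper.

from itertools import product
from math import comb

def block_error_rate(N, k, p):
    """P_B for an (N, k) RS-protected BOP under i.i.d. packet loss with rate p."""
    return sum(comb(N, m) * p**m * (1 - p)**(N - m) for m in range(N - k + 1, N + 1))

def optimal_parity_allocation(D, N, p, parity_budget, step=3):
    """Enumerate parity allocations t_i = N - k_i (in multiples of `step`) and
    return the one minimizing the expected distortion of Equation 3."""
    choices = list(range(0, N, step))
    pb = {t: block_error_rate(N, N - t, p) for t in choices}  # cache P_B per parity count
    best_t, best_cost = None, float("inf")
    for t in product(choices, repeat=len(D)):
        if sum(t) > parity_budget:            # constraint: stay within the FEC budget
            continue
        cost, survive = 0.0, 1.0
        for D_i, t_i in zip(D, t):            # Equation 3, layer by layer
            cost += D_i * pb[t_i] * survive
            survive *= 1.0 - pb[t_i]
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Hypothetical example: base mesh + 3 batches, 63 packets per BOP, 10% packet loss,
# budget of 40 parity packets in total.
print(optimal_parity_allocation(D=[10.0, 4.0, 2.0, 1.0], N=63, p=0.10, parity_budget=40))
```

As expected from the discussion above, the base-mesh layer (the one with the largest distortion if lost) tends to receive the largest share of the parity budget as the loss rate grows.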

It is anticipated that for an error-free channel, the solution of this optimization problem produces equal $k_i$'s, while in the case of a channel with a high packet loss rate, the base-mesh consumes most of the available error-protection bit budget while the other layers get a very small portion of it, if any. The above optimization problem has been solved for a number of meshes. Table 1 lists the optimal RS code rates for the mesh SMALL BUNNY for a set of packet loss rates. As can be seen from this data, the solution is scalable with respect to variations in the channel error characteristics. It is also inherently bandwidth-scalable due to the embedded encoding. The following section discusses the experimental results in detail.

(a) Corresponding EEP RS code is (63,45).

Packet loss rate   base-mesh   LOD_1   LOD_2   LOD_3
0.00               0.71        0.71    0.71    0.71
0.04               0.68        0.71    0.71    0.75
0.10               0.62        0.73    0.75    0.76
0.20               0.57        0.76    0.73    0.79
0.30               0.48        0.71    0.71    0.95

(b) Corresponding EEP RS code is (63,57).

Packet loss rate   base-mesh   LOD_1   LOD_2   LOD_3
0.00               0.90        0.90    0.90    0.90
0.08               0.79        0.94    0.90    0.98
0.10               0.79        0.94    0.90    0.98
0.20               0.65        0.98    0.98    1.00
0.30               0.65        0.98    0.98    1.00

Table 1. The optimal RS code rates used in the proposed UEP method for the SMALL BUNNY mesh that is compressed into three LODs in addition to the base-mesh (n = 3). The RS codes are defined over $GF(2^6)$. (… = 5.0)

4. SIMULATION RESULTS

To demonstrate the efficacy of the proposed UEP method, we used both subjective and objective methods of comparison. In particular, we used the Hausdorff distance between densely sampled points on the original (transmitted) mesh and the decoded mesh as an objective comparison metric. In these experiments, the error-protection bit budget (i.e., $R$) is assumed to be given. For a fair comparison between the three methods (i.e., no error protection (NEP), equal error protection (EEP), and unequal error protection (UEP)), the bit budget for source coding (i.e., mesh compression) is reduced by $R$ bits when equal or unequal error protection is applied. That is, in all approaches, the total number of bits spent for source and channel coding is the same. Also, all meshes are compressed into four levels (i.e., $n = 3$).
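For reference, the following minimal sketch (ours, not the authors' evaluation code) computes the symmetric Hausdorff distance between two point sets sampled from the transmitted and decoded meshes. The mesh sampling itself is omitted, and a practical implementation would use a spatial index rather than the quadratic scan shown here.

```python
# Minimal sketch of the Hausdorff distance between two densely sampled point sets
# A and B (lists of (x, y, z) tuples). Quadratic-time scan for clarity only.

from math import dist  # Euclidean distance between two points (Python 3.8+)

def directed_hausdorff(A, B):
    """Max over points a in A of the distance from a to its nearest point in B."""
    return max(min(dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point samples of two meshes."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Toy example with a handful of sample points.
A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
B = [(0.0, 0.0, 0.1), (1.0, 0.1, 0.0)]
print(hausdorff(A, B))
```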

Figure 2(a) depicts the Hausdorff distance as a function of the packet loss rate for the SMALL BUNNY mesh. The three curves in this figure represent the cases of EEP, UEP, and NEP. The FEC bit budget $R$ is … bytes, corresponding to a channel code rate of 45/63 in the EEP approach. As shown in Figure 2(a), when the packet loss rate is less than 0.04 (i.e., 4%), NEP performs better than both EEP and UEP because it uses more bits for source coding, and the channel coding bits (used in EEP and UEP) are largely redundant at such low packet loss rates. Comparing these three curves, we can conclude that the UEP method outperforms EEP when the packet loss rate is higher than 0.02, while it outperforms NEP when the packet loss rate is higher than 0.04 (i.e., 4%). Moreover, using the proposed method, the quality of the decoded mesh degrades more gracefully as the packet loss rate increases. As expected, at low packet loss rates (less than 0.06), the performances of the EEP and UEP methods are close to each other since the RS code is capable of recovering the lost packets in both cases. On the other hand, at higher packet loss rates, the base-mesh in the EEP method is subject to packet losses that cannot be recovered by the RS code. Nevertheless, the UEP method can still recover these lost packets since the error-protection level associated with the base-mesh is significantly higher than the corresponding one in the EEP method. This results in a better performance of the UEP method compared to the EEP method. Finally, the optimal RS codes assigned to each layer in the UEP case are tabulated in Table 1(a). These optimal RS code rates are calculated by solving the constrained optimization problem given in Equations 4 and 5.

Increasing the RS-code rate further to (63,57) results in an EEP performance that is close to the NEP performance, as expected. The number of channel coding bits in this case is much smaller than in the case shown in Figure 2(a). Nevertheless, UEP outperforms NEP and EEP when the packet loss rate is higher than 0.02 (i.e., 2%), as depicted in Figure 2(b). The optimal RS code rates used in this case are tabulated in Table 1(b).

[Figure 3 shows seven renderings: (a) the original SMALL BUNNY mesh; (b) NEP, packet loss rate 0.15; (c) NEP, 0.35; (d) EEP, 0.15; (e) EEP, 0.35; (f) UEP, 0.15; (g) UEP, 0.35.]

Fig. 3. Subjective results of applying the no error protection (NEP), equal error protection (EEP), and unequal error protection (UEP) methods to the SMALL BUNNY mesh, where the RS code used for the EEP case is (63,45). The caption under every image gives the error-protection method and the packet loss rate of the channel.

[Figure 2 consists of two plots, (a) and (b).]

Fig. 2. Maximum error (Hausdorff distance) between the transmitted and the decoded SMALL BUNNY meshes when the RS code used for EEP is (a) (63,45) and (b) (63,57).

We also compare the three methods, NEP, EEP, and UEP, subjectively. Figure 3 shows the experimental results for the SMALL BUNNY mesh when the (63,45) RS code is used to protect all layers in EEP. As shown, UEP keeps a reasonable decoded mesh quality level as the packet loss rate increases.

5. CONCLUSIONS

In this paper, we presented an error-resilient method for 3-D mesh

transmission. The proposed method is scalable with respect to both

channel bandwidth and channel error characteristics. The bit bud-

get allocation method assigns optimal RS-code rates for different

layers in the encoded bit-stream to maximize the decoded mesh

quality. These optimal RS codes depend on: the error protection

bit budget, the channel packet loss rate, and batch-by-batch rate-

distortion characteristics of the source mesh. Experimental results

show that with our UEP approach, the quality of the decoded mesh

degrades more gracefully as the packet loss rate increases. More-

over, the applicability of the proposed UEP method does not de-

pend on a particular 3-D mesh compression algorithm, although

we used CPM in this paper. Generally, the proposed unequal error

protection method outperforms the other two methods (i.e., no er-

ror protection and equal error protection) for all packet loss rates

higher than %, where is a function of the error-protection bit

?

?

budget and the source mesh characteristics. This is due to the '

fact that our distortion measure produces optimal FEC code rates

by incorporating both source and channel characteristics into the

distortion function. In this work, the error protection bit budget, , at a particular packet loss rate is assumed to be given. That

'

is, the bit-allocation between the source and channel coding is already determined. However, this source-channel bit-allocation can best be determined using a rate-distortion analysis. That is, rather than , the total number of bits should be specified, and the op-

'

timization algorithm should determine what percentage of those bits should be spent for error-protection in addition to how they should be distributed among the layers. This extension will be part of another study.

6. REFERENCES

[1] G. Taubin and J. Rossignac, "Geometric compression through topological surgery," ACM Transactions on Graphics, vol. 17, no. 2, pp. 84–115, April 1998.

[2] C. Touma and C. Gotsman, "Triangle mesh compression," in Proceedings of Graphics Interface, Vancouver, Canada, June 1998.

[3] R. Pajarola and J. Rossignac, "Compressed progressive meshes," IEEE Transactions on Visualization and Computer Graphics, vol. 6, no. 1, pp. 79–93, January–March 2000.

[4] R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, Reading, MA, 1983.

[5] U. Horn, K. Stuhlmüller, M. Link, and B. Girod, "Robust internet video transmission based on scalable coding and unequal error protection," Signal Processing: Image Communication, vol. 15, pp. 77–94, 1999.

[6] S. S. Rao, Engineering Optimization, John Wiley, 1996.
