


USING POISSON’S EQUATION TO CHARACTERIZE BRAIN TUMOR SHAPE

BY

BENJAMIN J. SINTAY

A Dissertation Submitted to the Graduate Faculty of

WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES

in Partial Fulfillment of the Requirements

for the Degree of

DOCTOR OF PHILOSOPHY

Biomedical Engineering

in the program of

VIRGINIA TECH – WAKE FOREST UNIVERSITY

SCHOOL OF BIOMEDICAL ENGINEERING AND SCIENCES

December 2008

Winston-Salem, North Carolina

Approved By:

J. Daniel Bourland, Ph.D., Advisor ____________________________________

Examining Committee:

Robert Plemmons, Ph.D., Chair ____________________________________

Carnell Hampton, Ph.D. ____________________________________

Michael T. Munley, Ph.D. ____________________________________

Christopher Wyatt, Ph.D. ____________________________________

ACKNOWLEDGEMENTS

First and foremost, this work is dedicated to my parents, Jerry and Donna Sintay, who have been there for me through every step of life. I attribute all of my successes and motivation to their selfless efforts. Mom and Dad – thank you for providing a home that celebrated creativity, integrity, and hard work. The completion of this step in my life honors both of our accomplishments.

I am especially grateful for the guidance and friendship of Dr. J. Daniel Bourland, my academic advisor. Dr. Bourland has been extremely supportive, professional, and inspiring during my time at Wake Forest University. It has truly been a pleasurable experience working for him – one that I will cherish for many years to come. To the members of my committee – Drs. Carnell Hampton, Robert Plemmons, Michael Munley, and Christopher Wyatt – thank you for your investment of time for my sake. Your insights have been extremely valuable in improving this dissertation.

I am grateful for the help of Rebecca Bishop in Clinical Trials for preparing the survival data for this dissertation and Dr. Stephen Tatter for spending many hours reviewing tumor segmentations and providing much inspiration for this work.

A special thank you is in order for the faculty and staff at Wake Forest University Baptist Medical Center, especially the members of the Radiation Oncology Physics Section. I could not have asked for a better group of people to foster my interest and understanding of the medical physics profession. Countless hours (and gray hairs) have been selflessly given for the sake of my education both in clinical practice and academic exercise. Thank you for allowing me to become a vital part of the daily operations of the clinic.

Thank you to the faculty, staff, and graduate students at Wake Forest University and within the Virginia Tech – Wake Forest University School of Biomedical Engineering and Sciences. I am extremely lucky to have been a part of this group. I cannot say enough about how wonderful the graduate school experience has been. I am especially thankful to the post-docs and graduate students in the Physics Section past and present.

To Dr. Joel Berry, Herbie Burns, and Dan Wilson – thank you for providing an escape from the pressures of graduate school. You all have been like brothers to me. To Kelli – I am lucky to have such a wonderful sister who is kind and understanding. You have always been there for me. To Monica – thank you for bringing joy into my life every day. I am proud to be your husband, and I look forward to the wonderful life that we will share together. You inspire me to do great things, and most importantly, make it all worthwhile. Finally, thank you to God for providing me with the strength and ability to accomplish my dreams.

TABLE OF CONTENTS

LIST OF TABLES viii

LIST OF FIGURES ix

LIST OF ABBREVIATIONS xviii

ABSTRACT xx

1. INTRODUCTION 1

1.1. SHAPE RECOGNITION 2

1.2. SHAPE METRICS SURVEY 4

1.3. CLINICAL SIGNIFICANCE 6

1.4. IMAGE ACQUISITION 11

2. NUMERICAL METHOD 14

2.1. POISSON’S EQUATION 14

2.2. FINITE DIFFERENCE METHOD 16

2.3. ITERATIVE TECHNIQUES FOR SOLVING POISSON’S EQUATION WITH FINITE DIFFERENCES 18

2.3.1. JACOBI’S METHOD 19

2.3.2. THE GAUSS-SEIDEL METHOD 21

2.3.3. ERROR AND PROCESSING TIME 23

2.3.4. CONVERGENCE 25

3. SHAPE CHARACTERIZATION 26

3.1. TRANSLATION, ROTATION, AND SCALE INDEPENDENCE 27

3.2. LEVEL-SET COMPLEXITY 29

3.3. DISPLACEMENT MAP 30

3.4. SHAPE CHARACTERISTIC CURVE 36

4. STATISTICAL FRAMEWORK 40

4.1. PERMUTATION TESTS 41

4.2. RANDOM PERMUTATION TESTS 42

4.3. SHAPE CHARACTERISTIC MONTE-CARLO PERMUTATION TEST 44

4.4. CLINICAL EXAMPLE: NORMAL NEUROANATOMY 45

5. SOFTWARE TOOLSET 55

5.1. IMPLEMENTATION 55

5.2. FRAMEWORK AND NOTABLE ALGORITHMS 57

6. VALIDATION STUDIES 61

6.1. ANALYTICAL SOLUTION COMPARISON 61

6.2. TWO-DIMENSIONAL SYNTHETIC STUDIES 66

6.2.1. COMPARATIVE STUDY 66

6.2.2. VARIOUS STRUCTURE STUDY 72

6.2.3. TRANSLATION, ROTATION, AND SCALE INDEPENDENCE 76

6.2.4. TWO-CLASS 2D SILHOUETTE STUDY 81

6.3. MULTI-FIGURE SYNTHETIC SHAPE STUDIES 86

6.4. THREE-DIMENSIONAL SYNTHETIC STUDIES 91

6.5. THREE-DIMENSIONAL ANATOMIC TEST STUDIES 94

6.6. PHYSICAL PHANTOM STUDIES 100

7. CLINICAL TRIAL 113

7.1. BACKGROUND 113

7.2. METHODS 115

7.3. RESULTS 117

7.4. DISCUSSION 138

8. CONCLUSIONS AND FUTURE DIRECTIONS 141

REFERENCES 144

APPENDIX A: INDEX OF IMPORTANT ROUTINES AND CODE 150

EXPERIMENT SCRIPTS 150

CORE FUNCTIONS 152

APPENDIX B: TOOLSET CODE 156

VITAE 177

LIST OF TABLES

Table 1: A summary of the Monte-Carlo permutation test brain studies. 54

Table 2: Clinical trial statistical significance for each month post-diagnosis including expired and surviving subject counts. Significant p-values are bolded. 136

LIST OF FIGURES

Figure 2.1: Central differences look at the difference in the central node. 17

Figure 2.2: A simple example of the Jacobi method for iteratively solving Poisson’s equation for a 3x3 grid with zeros on the boundary. 20

Figure 2.3: A simple example of the Gauss-Seidel method for iteratively solving Poisson’s equation for a 3x3 grid with zeros on the boundary. 22

Figure 3.1: Shape is scale, translation, and rotation independent. 28

Figure 3.2: The solution to Poisson’s equation for the interior of a square. 29

Figure 3.3: The level sets of the solution to Poisson’s equation. 29

Figure 3.4: The arc a plane flies due to the earth’s round shape is analogous to a streamline for computing geodesic distance. 31

Figure 3.5: Displacement map of a two-dimensional square where the maximum value is given by dark red and the minimum value is given by dark blue. 32

Figure 3.6: Streamlines (black arrows) along the level sets of the potential for an arbitrary object. 32

Figure 3.7: Plot of the normalized gradient for a 5x5 2D square grid. Vectors point toward the maximum. 33

Figure 3.8: A basic shape characteristic curve of a square. 37

Figure 3.9: Shape characteristic curves for four basic geometric shapes. 39

Figure 4.1: The shape characteristic curves of the right hippocampus (red) and brainstem (blue) for 18 normal subjects. 46

Figure 4.2: The baseline mean of the right hippocampus (red) and brainstem (blue) for 18 normal subjects. 47

Figure 4.3: The histogram of the permutation test of the difference between the right hippocampus and the brainstem. 48

Figure 4.4: The shape characteristic curves of the right (red) and left (blue) hippocampus for 18 normal subjects. 49

Figure 4.5: The baseline mean shape characteristic curves of the right hippocampus (red) and left hippocampus (blue) for 18 normal subjects. 49

Figure 4.6: The histogram of the permutation test of the difference between the right and left hippocampus for 18 normal subjects. D0 = 0.0324. 50

Figure 4.7: The shape characteristic curves of the right hippocampus for 18 normal subjects. 51

Figure 4.8: The histogram of the permutation test of the difference between the right hippocampus for 18 normal subjects. D0 = 0, thus, the entire distribution is greater than the baseline verifying the null hypothesis as expected. 51

Figure 4.9: The shape characteristic curves of the 3rd ventricle (red) and 4th ventricle (blue) for 18 normal subjects. 52

Figure 4.10: The baseline mean shape characteristic curves of the 3rd ventricle (red) and 4th ventricle (blue) for 18 normal subjects. 53

Figure 4.11: The histogram of the permutation test of the difference between the 3rd and 4th ventricles for 18 normal subjects. D0 = 0.104. 53

Figure 6.1: Image of the potential of a circle found by using the analytical, direct method. 62

Figure 6.2: Image of the potential of a circle found iteratively using a Gauss-Seidel traversal. 62

Figure 6.3: Error map of discrepancies between the direct and iterative methods for computing the solution to Poisson’s equation. 63

Figure 6.4: Image of the potential of a square found by using the analytical, direct method. 64

Figure 6.5: Image of the potential of a square found iteratively using a Gauss-Seidel traversal. 64

Figure 6.6: Error map of discrepancies between the direct and iterative methods for computing the solution to Poisson’s equation. 65

Figure 6.7: Binary silhouettes of synthetic shapes from left to right; a circle, oval, square, and triangle. 68

Figure 6.8: The solution to Poisson’s equation, or the potential for a square (left) and the corresponding displacement map (right). 70

Figure 6.9: The solution to Poisson’s equation, or the potential for a circle (left) and the corresponding displacement map (right). 70

Figure 6.10: The solution to Poisson’s equation, or the potential for an oval (left) and the corresponding displacement map (right). Arrows identify areas of minor numerical error. 71

Figure 6.11: The solution to Poisson’s equation, or the potential for a triangle (left) and the corresponding displacement map (right). Arrows identify areas of minor numerical error. 71

Figure 6.12: The shape characteristic curves for a circle (dashed line), oval (dash-dotted line), square (solid line), and triangle (dotted line). 72

Figure 6.13: Binary representation of the test shapes. 73

Figure 6.14: The solution to Poisson’s equation for nine different shapes. 74

Figure 6.15: The displacement maps of nine different test shapes. 75

Figure 6.16: The shape characteristic curves of nine different 2D synthetic shapes. 76

Figure 6.17: Various squares (top row) and triangles (bottom row). 77

Figure 6.18: The solution to Poisson’s equation for various squares. 77

Figure 6.19: The solution to Poisson’s equation for various triangles. 78

Figure 6.20: The displacement map for varying squares. 78

Figure 6.21: The displacement map for varying triangles. 79

Figure 6.22: The shape characteristic curves of the squares (red) and triangles (blue) used in the translation, rotation, and scale independence study. 79

Figure 6.23: The mean shape characteristic curves of each class. 80

Figure 6.24: The histogram of the Monte-Carlo permutation test, D0 = 0.2937. 80

Figure 6.25: 18 cat silhouettes. 82

Figure 6.26: 18 dog silhouettes. 82

Figure 6.27: The potential for 18 cats. 83

Figure 6.28: The potential for 18 dogs. 83

Figure 6.29: The displacement map for 18 cats. 84

Figure 6.30: The displacement map for 18 dogs. 84

Figure 6.31: The shape characteristic curves for 18 cats and 18 dogs. 85

Figure 6.32: The mean shape characteristic curves. 85

Figure 6.33: The permutation test histogram for the cat and dog study, D0 = 0.0908. 86

Figure 6.34: Multi-figure images. 88

Figure 6.35: Multi-figure images. 89

Figure 6.36: Multi-figure shape characteristic curves. 90

Figure 6.37: The potential (left) and displacement map (right) for a 3D cube. 91

Figure 6.38: Comparison of 2D and 3D shape characteristic curves for complementary objects. 92

Figure 6.39: The shape characteristic curves for various 3D cubes and spheres. 93

Figure 6.40: The mean shape characteristic curves for the groups of cubes and spheres. 93

Figure 6.41: The permutation test histogram from the 3D spheres and cubes study, D0 = 0.703. 94

Figure 6.42: Brain image from the IBSR (left) and an example segmentation (right). 95

Figure 6.43: The potential for a 3D volume of the human caudate nucleus. 96

Figure 6.44: The displacement map for a 3D volume of a human caudate nucleus. 97

Figure 6.45: The shape characteristic curve for a normal human caudate nucleus. 97

Figure 6.46: Shape characteristic curves for 18 different healthy adult caudate nuclei (right side) of the brain. 98

Figure 6.47: Comparison of the right (red) and left (blue) caudate nuclei in 18 normal human subjects. 99

Figure 6.48: Comparison of the right pallidum (red) and the right caudate (blue). 99

Figure 6.49: Six water-filled sphere phantoms. 101

Figure 6.50: Three water-filled cube phantoms of varying size. 101

Figure 6.51: Physical phantom MRI coronal view. 102

Figure 6.52: Axial slice of physical phantoms revealing an example of a screw in cube phantoms (top of left cube). 103

Figure 6.53: Segmentation contours of a sphere and cube. 103

Figure 6.54: Shape characteristic curves for an ideal cube and three MRI-scanned cubes with various voxel dimensions. 105

Figure 6.55: Shape characteristic curves for an ideal sphere and six MRI-scanned spheres with various voxel dimensions. 106

Figure 6.56: Shape characteristic curve comparison for three cubes (red) and six spheres (blue), all downsampled to the maximum voxel dimension to create a relatively low-resolution binary map. 107

Figure 6.57: Mean shape characteristic curves of the low resolution isotropic phantom data. 108

Figure 6.58: The permutation test histogram for the low resolution isotropic phantom discrimination test. D0 = 0.19435. 108

Figure 6.59: The shape characteristic curves of the finely sampled phantom data. 109

Figure 6.60: Mean shape characteristic curves of the finely sampled phantom data, plotted with the curves of the ideal shapes. 110

Figure 6.61: The permutation test results of the finely sampled phantom data. D0 = 0.44819. 110

Figure 6.62: The mean shape characteristic curves of the upsampled (fine; blue) and downsampled (coarse; red) sphere phantom data. 111

Figure 6.63: The results of the permutation test comparing the finely and coarsely sampled spheres. D0 = 0.24453. 112

Figure 7.1: Two different examples of malignant glioma, grade IV (GBM). 114

Figure 7.2: Survival curve for GBM clinical trial (N = 98). 118

Figure 7.3: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than one month. 120

Figure 7.4: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than two months. 121

Figure 7.5: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than three months. 122

Figure 7.6: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than four months. 123

Figure 7.7: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than five months. 124

Figure 7.8: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than six months. 125

Figure 7.9: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than seven months. 126

Figure 7.10: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than eight months. 127

Figure 7.11: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than nine months. 128

Figure 7.12: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than ten months. 129

Figure 7.13: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than eleven months. 130

Figure 7.14: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than twelve months. 131

Figure 7.15: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than thirteen months. 132

Figure 7.16: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than fourteen months. 133

Figure 7.17: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than fifteen months. 134

Figure 7.18: Statistical significance of clinical trial permutation tests shown for each month. 135

Figure 7.19: Survival versus tumor volume for clinical trial subjects. 137

LIST OF ABBREVIATIONS

2D      Two-dimensional
3D      Three-dimensional
4D      Four-dimensional
CM      Centimeters
CT      Computed Tomography
CTV     Clinical Target Volume
DICOM   Digital Imaging and Communications in Medicine
DNA     Deoxyribonucleic Acid
DVH     Dose Volume Histogram
FFT     Fast Fourier Transform
GB      Gigabyte
GTV     Gross Tumor Volume
IBSR    Internet Brain Segmentation Repository
IGRT    Image-guided Radiation Therapy
IMRT    Intensity-modulated Radiation Therapy
MB      Megabyte
MIPAV   Medical Image Processing, Analysis, and Visualization
MLC     Multi-leaf Collimator
MM      Millimeters
MRI     Magnetic Resonance Imaging
MRS     Magnetic Resonance Spectroscopy
OAR     Organ at Risk
PACS    Picture Archiving and Communication System
PDE     Partial Differential Equation
PET     Positron Emission Tomography
PTV     Planning Target Volume
PX      Pixels
RT      Radiation Therapy
SPGR    Spoiled Gradient Recalled
TPS     Treatment Planning System
VOI     Volume of Interest

ABSTRACT

Sintay, Benjamin J.

USING POISSON’S EQUATION TO CHARACTERIZE BRAIN TUMOR SHAPE

Dissertation under the direction of

J. Daniel Bourland, Ph.D., Associate Professor of Radiation Oncology

The role that tumor shape plays in disease treatment, progression, and outcome is not well understood. This work investigates the quantification of shape information required for the statistical analysis of tumors. The statistical shape characterization method proposed by Haidar et al. (2006), which uses Poisson’s equation, is adapted for tumor shape parameterization. An extensive algorithm using finite differences and a robust software toolset have been developed for efficient computation of the solution to Poisson’s equation for three-dimensional tumor volumes as well as general anatomic features. A characteristic plot that is unique to a given shape is generated. The statistical framework of this method utilizes a Monte-Carlo permutation test to establish significance between sets of characteristic plots. The method has been thoroughly tested on two-dimensional (2D) and three-dimensional (3D) datasets that include simple geometric shapes as well as contoured anatomical volumes from images of the human brain.

The shape characteristic curve is able to distinguish subtle differences in simple 2D and 3D phantoms as well as in human brain anatomy. Such differences are difficult to quantify and often do not correlate with volume. Scale and translation insensitivity of the method are also demonstrated. It is also shown that anisotropic datasets, such as those acquired from magnetic resonance imaging (MRI) scans, can be upsampled, or resliced, to form images suitable for shape characterization. The methods and software toolset have been applied in a retrospective, image-based clinical trial to examine the shape characteristic curves of pre-operative glioblastoma multiforme (GBM) tumors. This preliminary clinical trial shows that shape characteristics for these tumors can be predictive of outcome (survival) for a population of approximately 100 subjects up to one year following diagnosis.
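The Monte-Carlo permutation test summarized above can be sketched in a few lines. This is a minimal illustration, not the dissertation’s toolset code, and the choice of the mean absolute difference between group-mean curves as the test statistic is an assumption made here for concreteness.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Monte-Carlo permutation test for two groups of characteristic curves.

    group_a, group_b: arrays of shape (n_subjects, n_samples), one row
    per subject's shape characteristic curve.  The baseline statistic D0
    is the mean absolute difference between the two group-mean curves;
    the p-value is the fraction of random relabelings whose statistic
    meets or exceeds D0.
    """
    rng = np.random.default_rng(seed)
    pooled = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]

    def statistic(a, b):
        # Distance between the mean curves of the two groups.
        return float(np.mean(np.abs(a.mean(axis=0) - b.mean(axis=0))))

    d0 = statistic(group_a, group_b)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])   # random relabeling
        if statistic(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= d0:
            exceed += 1
    return d0, exceed / n_perm                   # (D0, p-value)
```

With two clearly separated groups of synthetic curves the returned p-value is near zero, while two samples from the same population yield a large p-value, mirroring how the D0 values and histograms reported later are interpreted.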

1. INTRODUCTION

Shape-based methods for analyzing human anatomy have become increasingly popular in the past decade. Recent research has shown that many anatomical structures reveal distinct features through their intrinsic geometry that cannot be recovered from popular shape-independent metrics such as volumetric measurements3, 8, 9, 28-33. A regular topic among shape metric researchers is the analysis of the complex geometry of the human brain3, 8, 24-27, 30-33. Such methods provide various ways to quantify, classify, and characterize brain tissues within medical images with varying degrees of precision, accuracy, ease of use, and clinical relevance. However, a significant problem with many of these methods is their lack of simplicity and of clear outputs appropriate for non-technical and/or clinical users.

In the field of oncology, medical image-based tumor shape often reveals important information that is hard to quantify. Proper classification based on shape has the potential to provide clinicians with data such as disease type, stage, progression/regression, and outcome10-14, 19. Shape becomes even more important in the radiation oncology specialty, where target anatomy must be identified, quantified, and tracked. Without quantification, it is difficult to study the geometric and morphologic characteristics of tumors in relation to treatment planning, targeting, and outcome. Thus, there is a significant need to measure tumor shape accurately and stably. The goal of this research is to better understand and apply shape metrics to the characterization of brain tumors using a toolset developed for efficient computation of the metric. This work has three parts: (1) a mathematical framework and software toolset by which the shape of a tumor can be quantified solely on the basis of its geometric composition; (2) a thorough validation of the characterization method to better understand how this feature can improve clinical decision making and tumor analysis; and (3) a clinical trial to understand how brain tumor shape is related to patient outcome.

1.1. SHAPE RECOGNITION

Humans use shape recognition nearly every minute of their lives. Recognition and classification of shape come quickly and naturally to our minds, but how does this process actually work? This is an important question that researchers are still trying to answer. The human brain is one of the most intricate and abstract computational devices in the known universe. At the center of this system lies an impossibly complex structure of organic logic capable of learning and recalling. It is this system that allows us to automatically recognize, and even quantify to some extent, shapes based on their visual appearance. The deceptively simple tasks that we perform throughout the day are extremely difficult to understand, much less to model within a computer system. What does it mean to turn a complex network of biological processes into a binary system of strict arithmetic and logic?

In recent years, recognition of shape within a computer system has become more commonplace3, 8, 9, 24-33. A vast array of computer programs exist that can automatically detect objects of all types including written letters in a document, faces in a photo, roads in a satellite image, etc. But now consider the complexity associated with the years of training that a physician must go through to learn to identify anatomical shapes; moreover, to identify irregular shapes such as tumors.

The process of identifying and dividing a whole into its parts is commonly referred to as segmentation. In the field of medicine, image-based segmentation is a critical process in many different specialties that is most often performed by a highly trained physician. Even after much effort to understand and implement various fully-automated and semi-automated segmentation methods, physician experts are still required to perform a majority of medical image segmentation.

Tissue contrast and spatial relationships are the two key factors in determining contours for segmentation in both human and computer-based approaches. Contrast is simply the difference in intensity between one pixel, or region of pixels, and its neighbors. However, contrast alone is of limited use anatomically because the objects of interest are generally larger than one pixel and have a certain amount of connectedness. Thus, there is an intrinsic shape aspect, or geometric relationship, associated with segmentation. Simple, widely used segmentation methods such as thresholding and clustering ignore these geometric considerations, but in recent years there has been a push to incorporate shape information into segmentation algorithms. These advances are reviewed further in Section 1.2.
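A minimal sketch of why a pure intensity threshold ignores geometry follows; the synthetic image, intensities, and threshold are assumed values chosen only for illustration, not data from this work. Each pixel is classified in isolation, so any two regions with the same intensity are treated identically regardless of their shape or connectedness.

```python
import numpy as np

# Synthetic 2D "image": a bright square object on a dark background
# (assumed intensities, chosen only for illustration).
rng = np.random.default_rng(0)
image = np.full((64, 64), 20.0)                  # dark background
image[20:40, 20:40] = 200.0                      # bright square "lesion"
image += rng.normal(0.0, 5.0, image.shape)       # mild acquisition noise

# Global thresholding: a per-pixel test with no geometric awareness.
threshold = 100.0
mask = image > threshold

object_area = int(mask.sum())                    # 20 x 20 = 400 pixels
```

The threshold recovers the square only because its intensity is well separated from the background; nothing in the rule itself encodes that the object is a single connected square, which is exactly the gap that shape-aware segmentation methods aim to fill.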

Within the human mind it is extremely difficult to separate the identification, classification, and general quantification of tumors. In this work, however, shape segmentation and quantification are treated as two distinct and separate processes. While it is acknowledged that the recognition of tumors relies directly on shape, this analysis assumes a valid recognition process and focuses on the endpoint of shape quantification74, 75. The purpose of this dissertation is thus to formulate and test a method of shape quantification for the oncologic setting that will give clinicians the capability of shape-based determination of lesion type and stage (or grade), eventually leading to shape-based diagnosis and treatment decisions. This goal requires the abstraction of a method to uniquely characterize the intrinsic geometry of a tumor.

1.2. SHAPE METRICS SURVEY

A significant amount of research has been conducted in the area of shape metrics in the last two decades. Medial-based models and partial differential equation-based (PDE) methods stand out as the most published methods for the medical imaging field2-9, 28-46. Other shape-based methods, which include filtering, spectral analysis, spherical harmonics and principal components, are numerous76-79. Many represent a small offshoot of a particular approach and may be unique in their application.

Active contours are one of the most widely researched PDE-based metrics used in medical imaging35-40. They are commonly lumped together with “snakes” and “level-set” methods. Their use extends from simple modeling to complex segmentation. In an active contour approach, shape boundaries are deformed based on internal and external energies. The internal energy forces the shape to adhere to its own local curvature while the external energy causes deformation to match an edge in an image. A statistical shape implementation of active contours uses a series of baseline figures to form a statistical model that can be used as a priori information for a given problem35. The model can then be deformed to match the current shape or boundary desired. This change can be measured and quantified if necessary. Such methods are popular and robust approaches for segmentation problems.

A well published method for shape metrics uses a construct called the medial manifold28-33. This work is based on the medial axis, or skeleton, first documented by Blum et al. in the late 1970s34. Pizer et al. have extended the medial axis into a Riemannian construct known as the m-rep28, 29. The m-rep is a method of discretizing the medial axis of three-dimensional models in a way that preserves the manifold through sparse sampling. Medial models are easily deformed and measured, which is why they have been suggested as the solution to many model-based problems. However, to non-experts the medial manifold is an abstract construct that is not easily understood. Furthermore, medial manifold implementation and manipulation require highly specialized skill.

Medial models also suffer from regularization problems that confine their ability to work with certain types of important deformations that can occur in tumors. Large changes to the model can occur from small offshoots at the boundary. New medial spokes must evolve to fill in the concavities or convexities, and comparing these changes is extremely difficult. Should an object develop a hole within the interior of the manifold, the m-rep mesh is morphed drastically to accommodate the new boundary. Again, it becomes very difficult to compare two shapes such as a doughnut and a circle, even though the general geometry of the two shapes can be very closely matched (say the hole is very small).

Many methods have been evaluated to find a computational framework that can readily characterize shape. The criteria for selecting a method are ease of computation, simple endpoints that are comprehensible to non-experts, and multidisciplinary robustness of the mathematical basis. PDE-based methods are easily understood by a variety of scientists and engineers. Not only are these methods well characterized, but their computational efficiencies have been thoroughly researched4, 37, 47-50. A large amount of data suggests that PDE-based metrics offer stable solutions to image processing problems41-46. In addition, PDE methods provide multi-scale representations of shape that can be independent of the observer’s position or frame of reference37.
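As a concrete example of the kind of PDE computation this work builds on, the sketch below relaxes Poisson’s equation inside a binary 2D shape using Gauss-Seidel sweeps of the standard five-point finite-difference stencil. It is a minimal illustration, not the dissertation’s toolset code; the unit source term and the iteration count are assumptions made here.

```python
import numpy as np

def solve_poisson(mask, n_iter=2000):
    """Iteratively solve  laplacian(u) = -1  inside a binary shape.

    mask: 2D boolean array, True inside the object; the object must
    not touch the array border.  u is held at zero outside the shape
    (Dirichlet boundary condition).  Each Gauss-Seidel sweep updates
    interior points in place with the five-point stencil (h = 1):
        u[i, j] = (u_north + u_south + u_east + u_west + 1) / 4
    """
    u = np.zeros(mask.shape)
    ys, xs = np.nonzero(mask)
    for _ in range(n_iter):
        for y, x in zip(ys, xs):                 # in-place sweep
            u[y, x] = 0.25 * (u[y - 1, x] + u[y + 1, x]
                              + u[y, x - 1] + u[y, x + 1] + 1.0)
    return u
```

For a square mask the converged potential is largest at the center and falls smoothly to zero at the boundary, which is the level-set structure that the shape characterization in later chapters is built on.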

1.3. CLINICAL SIGNIFICANCE

Approximately one out of every three people will develop cancer at some point in their lives (46% of men and 38% of women). In the United States, cancer is one of the leading causes of death, second only to heart disease51. Treatments for cancer patients include a variety of options, some at the discretion of physicians, others at the discretion of the patient. The most frequently used choices are surgery, chemotherapy, and radiation therapy, either singly or in combination, along with alternatives such as hyperthermia and photodynamic therapy; the choice depends mostly on disease type and location, as well as a variety of patient-specific factors. Most often the choice of treatment is based on some measure of risk determined by the many prognostic indicators. Improving or adding to these indicators could allow better treatment selection and execution.

Approximately half of those diagnosed with cancer will require radiation therapy. In radiation therapy, a distribution of ionizing radiation, usually high-energy photons or electrons, is delivered to a tumor volume either internally or externally. Ionizing radiation is an efficient way to interrupt the growth and reproduction cycle of cells. The genetic material (DNA) within the cell is cleaved, crippling the cell’s ability to survive. In many cancer cells, the ability to repair radiation damage is significantly reduced compared to normal tissue. Because both cancerous and normal tissues are irradiated, and radiation carries with it the threat of injury and secondary malignancies to otherwise healthy tissues, strict dose tolerances for normal tissues must be observed. Thus, the goal of radiation treatment is to deliver a very high dose to the target volume while maintaining a dose as low as possible to nearby normal and critical structures.

Nearly all processes in radiation therapy require some form of shape analysis. From the identification of tumors and other important organs and tissue, to the setup and evaluation of a treatment plan, medical professionals use some form of shape detection, investigation, and comparison. Often shapes are reduced to volumetric measurements or are “eye-balled” with no specific measure of intrinsic geometry. As the treatment process becomes more heavily reliant on technology, it is important to use measures and processes that are reproducible and quantifiable10, 11, 16-18, 24-27, 80.

Tumor treatment using radiation therapy requires highly precise dose delivery to a known volume. It is widely accepted that targeting of treatment volumes is limited by organ motion, tumor progression/regression, and setup errors. These changes can cause unintended dose delivery and suboptimal disease control. New technologies have been introduced to help track such changes and include a host of image-guided treatments, commonly known as image-guided radiation therapy (IGRT)81. In IGRT, images are taken at the time of treatment, and then compared to a baseline set of images used in the planning process to ensure the target is in the expected position for treatment. If a change is needed, rigid transformations can be applied in six degrees of freedom. Three translational adjustments can be made as well as three rotational adjustments. Additionally, the treatment field can be scaled to account for volumetric changes. Yue et al. have suggested an approach that allows changes in six degrees of freedom as well as scalar isotropic changes to the volume17. However, geometric variations are not addressed.
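A six-degree-of-freedom rigid correction of the kind described above can be sketched as follows; the axis ordering, function name, and parameters are assumptions made for illustration, not the interface of any particular IGRT system.

```python
import numpy as np

def rigid_transform(points, rx, ry, rz, tx, ty, tz):
    """Apply a rigid six-degree-of-freedom correction to 3D points.

    rx, ry, rz: rotations in radians about the x, y, and z axes,
    composed in that order (an assumed convention); tx, ty, tz:
    translations.  Distances between points are preserved, so the
    target's shape and volume are unchanged -- only its pose moves.
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                       # composite rotation
    return points @ R.T + np.array([tx, ty, tz])
```

Because the transform is rigid, it can reposition the target but cannot represent anisotropic deformation of the target shape, which is precisely why a shape metric is needed alongside these corrections.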

These technologies all assume that rigid, isotropic changes have occurred in the target volume; that is, that the target is still precisely the same shape at the time of treatment. There is much evidence that volume changes include anisotropic deformation of the actual target shape over time17, 52-53. Identifying changes in shape could provide a much clearer picture of the current state of the volume to be treated. A metric that quantifies shape can provide a distinct look at how the geometry has changed since the treatment plan was generated and serve as a reference for deciding when a plan fails to meet similarity requirements.

Equally important is monitoring tumor response to any form of treatment. Many clinics are now recording and analyzing serial image sets from before, during, and after treatment. Tumors are often irregularly shaped, and imaging parameters may vary, making direct analysis difficult when comparing two images of a tumor recorded at different times. Quantification of change would allow simple comparisons over time. Having a direct measure of treatment response and/or disease progression would have a significant impact on clinical studies. Several groups are exploring ways to track changes in tumors over time14-15. Haney et al. have proposed a method to track changes in malignant gliomas by volumetric measurements15. While volume is important for tracking overall response, volume alone does not paint a clear picture of how the geometry of a tumor is changing.

Tumor classification is also important in the tumor treatment process, especially when selecting an appropriate treatment. This is where imaging is particularly useful in determining a plan of action, as results are available immediately and without the consequences associated with other, invasive forms of measurement such as surgical resection. Researchers in imaging are shifting attention to automatic image-based methods to categorize and/or stage disease11, 19-22. Such methods are highly reproducible and can save physicians a significant amount of time. Several groups are looking at magnetic resonance spectroscopy (MRS) to classify lesions19-22. Wang et al. use magnetic resonance imaging (MRI) and MRS data to classify a tumor as benign or malignant in order to produce a quick clinical diagnosis19. MRS produces a mapping of biochemical properties to a physical location but does not account for the spatial organization of tissue. We believe that shape can reveal features of a tumor that aid in disease classification, either on its own or in combination with another feature such as MRS data.

One final use for shape in oncology is that of tracking changes in a tumor over a period of time or during the course of treatment. This area of research has received a large amount of attention in recent years14-16, 70. Most of this work has centered on volume-based measurements and comparisons. Toolkits such as VETOT have been developed based on active contour and level-set methods to track tumor changes over time14. Shape-based metrics may provide additional classification features which are critical to identify subtle changes in tumors. Our initial work in this area based on the PDE-methods described herein can be found in [70].

In each of these applications of shape metrics in oncology, it is important to quantify and classify tumor shape and any subsequent changes captured on medical images. Such tasks must be performed by a variety of personnel, including surgeons, radiation therapists, dosimetrists, physicians, and physicists. An appropriate metric must appeal to a broad audience while communicating quickly and effectively. It must be capable of handling any sort of deformation, such as figure splitting or central degradation (for example, a circle becoming a doughnut). It must be able to handle disconnected multifocal tumors or metastatic disease. Finally, it must be able to handle all of these constraints without regard to differences in orientation or scale. These characteristics have governed the method chosen in this work for quantification and representation of shape.

For this study, we have chosen to focus on brain tumor classification in a retrospective, image-based clinical trial using anatomical shape acquired from MRI scans that were obtained prior to treatment. This method could prove to be very advantageous as MRI is a fast and effective means for collecting data suitable for prognostic analysis. This is a first step in understanding and applying PDE-based methods to the challenges previously mentioned in this section. Details of the clinical trial employed for these purposes can be found in Chapter ‎7.

4 IMAGE ACQUISITION

There are a variety of ways to acquire the images needed to determine shape metrics for human anatomy. The most common form of medical imaging is the radiograph. A radiograph is created when x-rays (~100 kVp) are captured on film or a digital receptor after attenuation by tissues of the human body82. The variable intensity pattern observed in the image represents the subject’s anatomy (soft tissue, lung, bone, etc.). X-rays were discovered at the turn of the 20th century by Wilhelm C. Roentgen. Although the discovery was partly accidental, it revolutionized the medical field, and radiography is still the most widely used modality of medical imaging available today13.

Planar x-ray combined with new methods of digital capture and processing led to the development of computed tomography (CT) in the early 1970s82. CT offers three-dimensional (3D) views of anatomy by utilizing a rotating x-ray beam and detector. Volumes are computed using digital filtered backprojection methods. This method enables the reconstruction of datasets from an arc of 2D fan-beam projection images. More recently, cone-beam CT (CBCT) has enabled the generation of high-resolution axial data in situations where CT is impractical83. CT and CBCT are quick and reliable methods to acquire precise 3D images, but they also carry with them the risk of complications and malignancy from ionizing radiation.

An alternative to the ionizing radiation of CT was also developed in the 1970s and is known as MRI. Not only can anatomy be visualized in three dimensions, but the contrast of different tissues or fluids can be enhanced using the physical properties of the materials84. MRI utilizes superconducting magnets to create an extremely high magnetic field. This field aligns atomic nuclei, which precess (a second-order wobble) about the field direction. The nuclei are rotated slightly by a small RF pulse, and their relaxation back to the aligned state is observed as a small change in the magnetic field. This change translates into contrast and allows the visualization of different tissues, as each material has unique relaxation properties.

An imaging method that has gained significant attention recently in radiation therapy is positron emission tomography (PET). In the 1950s, physicists learned to image the annihilation energy released in positron decay through coincidence detection. This brought about a new generation of images showing metabolic data. In PET imaging, positron-emitting radionuclides such as ¹¹C, ¹³N, ¹⁵O, or ¹⁸F take part in the metabolic process84. This causes radioactivity to localize in areas of higher metabolic activity. A ring of detectors counts annihilation events by capturing the two coincident 511 keV photons produced by the positron annihilation process.

Each of these modalities plays an important role in visualizing human anatomy. Typically, MRI is used heavily in imaging neuroanatomy, the area of focus for this research (see Section ‎7.2 for specific details). The shape methods presented in this work are capable of analyzing any medical image modality, including 2D, 3D, and 4D scans. It is also possible to combine multiple sources of data into any number of dimensions using the methods in this work, assuming ideal registration. However, these applications are not addressed at this time.

NUMERICAL METHOD

The numerical method for the shape metric is now reviewed. PDE methods are used because of their many useful and broad implications. The PDE used in this research is known as Poisson’s equation. Problems that utilize Poisson’s equation typically involve solving a finite difference approximation on the interior of an object. Gorelick et al. have suggested a number of features that can be extracted from Poisson’s equation, including extremity maps, concavity/convexity detection, and skeletonization2, 6. We are interested in the intrinsic makeup of an object’s interior, making this particular PDE an excellent match for our needs.

1 POISSON’S EQUATION

Poisson’s equation is a well-understood second-order partial differential equation. Its most common use is in the field of electrostatics, where it can be derived directly from the first Maxwell equation. For a steady-state electric field, the first Maxwell equation gives

∇ · E = ρ/ε. (Eq. 2-1)

The electrical field, E, can be expressed as

E = −∇φ. (Eq. 2-2)

By substitution, this yields the electrostatic form of the Poisson equation

∇²φ = −ρ/ε (Eq. 2-3)

where ρ is the charge density and ε is the permittivity of free space. For three-dimensional space, the left-hand side can be further expanded to

∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = −ρ/ε. (Eq. 2-4)

The Laplace operator can be used in this form

Δ = ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z², (Eq. 2-5)

Δφ = −ρ/ε. (Eq. 2-6)

This equation can be used to represent more than just electrostatics by generalizing the function and using the form

∇²u = f. (Eq. 2-7)

This is the most basic form of Poisson’s equation. This form is often extracted into a functional representation with the components separated

Δu(x, y, z) = f(x, y, z), (Eq. 2-8)

where f is a function evaluated at a particular x, y, and z coordinate within the interior of an object. Laplace’s equation is similar in nature, but the function f is identically zero

∇²u = 0, (Eq. 2-9)

Δu(x, y, z) = 0. (Eq. 2-10)

Laplace’s equation is particularly useful for evaluating problems that occur on the exterior of a boundary or in the space between two changing boundaries. In contrast, Poisson’s equation is used on the interior of a single-boundary object.

2 FINITE DIFFERENCE METHOD

In this work, the solution to Poisson’s equation is sought on a finite grid within an object, which may be either two- or three-dimensional. A practical way to compute this solution, also known as the potential, is to use a finite differencing method47-50. Finite differences in one dimension are generalized by the change in a function over a given interval

du/dx ≈ Δu/Δx = (u(x₂) − u(x₁)) / (x₂ − x₁). (Eq. 2-11)

More specifically, the finite difference between two points can be tailored by using either forward, backward, or central differences. The method of differencing specifies where a segment begins and ends. In the forward difference with grid spacing h, the first point is offset

du/dx ≈ (u(x + h) − u(x)) / h. (Eq. 2-12)

The backward difference method offsets the second point

du/dx ≈ (u(x) − u(x − h)) / h. (Eq. 2-13)

Central differencing (Figure ‎2.1) finds the middle ground between the two points

du/dx ≈ (u(x + h/2) − u(x − h/2)) / h. (Eq. 2-14)

[pic]

Figure ‎2.1: Central differences look at the difference in the central node.
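As a quick numerical illustration (ours, not part of the dissertation), the three differencing schemes can be compared on a function with a known derivative; the central difference, being second-order accurate, gives a much smaller error for the same h:

```python
import math

# Compare forward, backward, and central differences for f(x) = sin(x) at
# x = 1.0, where the exact derivative is cos(1.0). The central form uses
# half-step evaluations around x, as described in the text.

def forward(f, x, h):
    return (f(x + h) - f(x)) / h

def backward(f, x, h):
    return (f(x) - f(x - h)) / h

def central(f, x, h):
    return (f(x + h / 2) - f(x - h / 2)) / h

h = 0.1
exact = math.cos(1.0)
errors = {name: abs(fn(math.sin, 1.0, h) - exact)
          for name, fn in (("forward", forward),
                           ("backward", backward),
                           ("central", central))}
print(errors)  # the central-difference error is the smallest of the three
```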

If the concept of central differences is applied to Poisson’s equation on a finite grid, the components uxx, uyy, and uzz can be approximated by

uxx ≈ (u_{i+1,j,l} − 2u_{i,j,l} + u_{i−1,j,l}) / h²,
uyy ≈ (u_{i,j+1,l} − 2u_{i,j,l} + u_{i,j−1,l}) / h², (Eq. 2-15)
uzz ≈ (u_{i,j,l+1} − 2u_{i,j,l} + u_{i,j,l−1}) / h².

Notice that each grid point serves as the central location between its two neighbors along each axis. Substituting these approximations back into the general form for Poisson’s equation yields

(u_{i+1,j,l} + u_{i−1,j,l} + u_{i,j+1,l} + u_{i,j−1,l} + u_{i,j,l+1} + u_{i,j,l−1} − 6u_{i,j,l}) / h² = f_{i,j,l} (Eq. 2-16)

u_{i,j,l} = (1/6)(u_{i+1,j,l} + u_{i−1,j,l} + u_{i,j+1,l} + u_{i,j−1,l} + u_{i,j,l+1} + u_{i,j,l−1} − h²f_{i,j,l}) (Eq. 2-17)

For this work, Dirichlet boundary conditions (u = 0 on the surface of a shape’s boundary) will be used.
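As a quick check (illustrative, not from the dissertation), the three-point second difference underlying these approximations is exact for quadratic functions:

```python
# Three-point second difference: exact for quadratics, so u(x) = x^2 returns
# u'' = 2 for any grid spacing h. (Illustrative one-dimensional check.)

def second_difference(u, x, h):
    return (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)

print(second_difference(lambda x: x * x, 3.0, 0.5))  # 2.0
```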

3 ITERATIVE TECHNIQUES FOR SOLVING POISSON’S EQUATION WITH FINITE DIFFERENCES

Solving Poisson’s equation for large datasets, such as those extrapolated from medical images, requires the use of iterative methods. Iterative techniques raster through the entire dataset within the domain of interest (u) for each iteration.

1 JACOBI’S METHOD

The most basic and time-consuming iterative method for solving Poisson’s equation is known as Jacobi’s method. This method handles the dataset at fixed iterations, allowing only data from the previous iteration to affect the new approximation to Poisson’s equation. The three-dimensional notation for this method is

u_{i,j,l}^{(k+1)} = (1/6)(u_{i+1,j,l}^{(k)} + u_{i−1,j,l}^{(k)} + u_{i,j+1,l}^{(k)} + u_{i,j−1,l}^{(k)} + u_{i,j,l+1}^{(k)} + u_{i,j,l−1}^{(k)} − h²f_{i,j,l}) (Eq. 2-18)

where k is the iteration number, k = 0,1,2,3,…. This can be written in shorthand notation as

u_{i,j}^{(k+1)} = (1/4)(u_{i+1,j}^{(k)} + u_{i−1,j}^{(k)} + u_{i,j+1}^{(k)} + u_{i,j−1}^{(k)}) + 1 (in two dimensions). (Eq. 2-19)

The term 4/h² is set to 1 since a standardized spatial grid will be used. It is easy to see that an average of a neighborhood of grid values is being taken when Jacobi’s method is written in this form. Figure ‎2.2 shows a simple example of this method on a 3×3 grid. In this example, the Dirichlet boundary conditions fix the value to zero outside of the boundary of the square. Grid points inside the square are initialized to a value of one (1). The example in Figure ‎2.2 rapidly converges to a steady state due to its small size. The convergence rate can be increased by a simple modification of this method that uses the most recently computed values for u. This approach is known as the Gauss-Seidel iterative method (Section ‎2.3.2).

Jacobi Iteration 0     Jacobi Iteration 1     Jacobi Iteration 2
1     1     1          1.50  1.75  1.50       1.88  2.25  1.88
1     1     1          1.75  2.00  1.75       2.25  2.75  2.25
1     1     1          1.50  1.75  1.50       1.88  2.25  1.88

Jacobi Iteration 3     Jacobi Iteration 4
2.13  2.63  2.13       2.31  2.88  2.31
2.63  3.25  2.63       2.88  3.63  2.88
2.13  2.63  2.13       2.31  2.88  2.31

Jacobi Iteration 6     Jacobi Iteration 7     Jacobi Iteration 50 (Steady State)
2.53  3.19  2.53       2.59  3.28  2.59       2.75  3.50  2.75
3.19  4.06  3.19       3.28  4.19  3.28       3.50  4.50  3.50
2.53  3.19  2.53       2.59  3.28  2.59       2.75  3.50  2.75

Figure 2.2: A simple example of the Jacobi method for iteratively solving Poisson’s equation for a 3×3 grid with zeros on the boundary.
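The 3×3 example of Figure 2.2 can be reproduced with a short sketch (an illustration of ours, not the dissertation's code); each Jacobi update replaces a grid value with the average of its four neighbors plus one, with a ring of zeros enforcing the Dirichlet boundary:

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep: neighbor average plus 1, boundary ring held at zero."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:]) + 1.0
    return new

u = np.zeros((5, 5))     # 3x3 interior surrounded by a zero boundary ring
u[1:-1, 1:-1] = 1.0      # interior grid points initialized to one

for _ in range(50):
    u = jacobi_step(u)

print(np.round(u[1:-1, 1:-1], 2))  # matches the steady-state grid of Figure 2.2
```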

2 THE GAUSS-SEIDEL METHOD

The Jacobi method can be easily modified to speed convergence of the approximation of Poisson’s equation. Instead of using only values from the previous iteration, values from the current iteration that have already been found are included in the computation. For this method, the shorthand equation for Jacobi’s method can be rewritten as

u_{i,j}^{(k+1)} = (1/4)(u_{i+1,j}^{(k)} + u_{i−1,j}^{(k+1)} + u_{i,j+1}^{(k)} + u_{i,j−1}^{(k+1)}) + 1. (Eq. 2-20)

The components in the backward directions (i−1 and j−1) from the current iteration (k+1) are used to speed convergence. Their values are computed earlier within the same iteration. Thus, a faster approach to the steady-state solution is expected. It is important to note that traversal order becomes an important factor when using the Gauss-Seidel method as opposed to the Jacobi method; if grid points are visited at random, little improvement will be seen. Figure ‎2.3 shows a simple example of the Gauss-Seidel method for finding the solution to Poisson’s equation on a 3×3 grid of nodes. Note that at iteration 7 the values are very close to the steady-state values.

A comparison of Figure ‎2.2 and Figure ‎2.3 reveals that the Gauss-Seidel method does indeed converge to the solution faster than the Jacobi method. The total difference (measured as the L2 norm in reference to the steady state solution) for the Jacobi method at iteration 7 is 1.83. The Gauss-Seidel method has a difference of 0.48 at iteration 7, a smaller and more desirable result. It has been shown that convergence of the Gauss-Seidel method is nearly twice as fast compared to using the simple Jacobi method48.

Gauss-Seidel Iteration 0     Gauss-Seidel Iteration 1     Gauss-Seidel Iteration 2
1     1     1                1.50  1.88  1.72             1.94  2.52  2.20
1     1     1                1.88  2.44  2.29             2.52  3.41  2.94
1     1     1                1.72  2.29  2.14             2.20  2.94  2.47

Gauss-Seidel Iteration 3     Gauss-Seidel Iteration 4     Gauss-Seidel Iteration 5
2.26  2.97  2.48             2.48  3.23  2.61             2.61  3.36  2.68
2.97  3.95  3.22             3.23  4.23  3.36             3.36  4.36  3.43
2.48  3.22  2.61             2.61  3.36  2.68             2.68  3.43  2.72

Gauss-Seidel Iteration 6     Gauss-Seidel Iteration 7     Gauss-Seidel Iteration 50 (Steady State)
2.68  3.43  2.72             2.72  3.47  2.73             2.75  3.50  2.75
3.43  4.43  3.47             3.47  4.47  3.48             3.50  4.50  3.50
2.72  3.47  2.73             2.73  3.48  2.74             2.75  3.50  2.75

Figure 2.3: A simple example of the Gauss-Seidel method for iteratively solving Poisson’s equation for a 3×3 grid with zeros on the boundary.
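Running the two schemes side by side makes the comparison concrete. In this sketch (ours, not the dissertation's code), the error is measured as the total absolute difference from the steady state after seven iterations; the exact norm used in the text may differ, but the Gauss-Seidel error is clearly smaller:

```python
import numpy as np

def jacobi(u):
    """One Jacobi sweep (neighbor average plus 1) on the padded grid."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:]) + 1.0
    return new

def gauss_seidel(u):
    """One Gauss-Seidel sweep, visiting points row by row, left to right."""
    u = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]) + 1.0
    return u

def error_after(step, iters, steady):
    """Total absolute difference from the steady state after a fixed sweep count."""
    u = np.zeros((5, 5))
    u[1:-1, 1:-1] = 1.0
    for _ in range(iters):
        u = step(u)
    return float(np.abs(u - steady).sum())

steady = np.zeros((5, 5))
steady[1:-1, 1:-1] = [[2.75, 3.5, 2.75], [3.5, 4.5, 3.5], [2.75, 3.5, 2.75]]
print(error_after(jacobi, 7, steady), error_after(gauss_seidel, 7, steady))
```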

3 ERROR AND PROCESSING TIME

As an iterative method, the finite differencing scheme yields a reduction in error at each iteration. Error in the approximation at any given iteration is defined as

e^{(k)} = u^{(k)} − u, (Eq. 2-21)

where u denotes the exact solution.

As such, the Jacobi method (used here as a baseline for runtime computation) experiences a reduction in error by a factor of

[pic] (Eq. ‎2-22)

for each iteration. Here n is given by the number of grid points in the problem. For cases where n is large, as encountered in this work, this term can be approximated as

[pic]. (Eq. ‎2-23)

Notice that this term approaches 1 as n goes to infinity. This can be used to estimate the change in error over k iterations. For example, if one desires to decrease the error by a factor of 10, the number of iterations can be estimated by

[pic] (Eq. ‎2-24)

[pic]. (Eq. ‎2-25)

When n becomes large, this equation becomes

[pic], (Eq. ‎2-26)

which yields an approximation for the required number of iterations. For n = 1,000, this equation results in approximately 182,740 iterations needed to reduce the error by a factor of 10. This yields a runtime generalization of

Processing time = number of steps × cost per step (Eq. 2-27)

= O(n) × O(n)

= O(n²)

The Gauss-Seidel method is capable of reducing the number of compute cycles needed for acceptable convergence by a factor of two, but this still requires a number of computations proportional to O(n²). It is possible to achieve Poisson solutions in less time using further advances such as Successive Overrelaxation (SOR) or Fast Fourier Transform (FFT) based methods, but the time savings are only marginal in this case. These methods are often explored in parallel environments or for maximum computational efficiency47-48. Such goals are outside the scope of this project; the goal here is to solve Poisson’s equation in the quickest manner that can easily be achieved in many different computational environments. Development related to this work has therefore focused on the Gauss-Seidel method.

4 CONVERGENCE

A wide variety of methods have been suggested to check for the convergence of the iterative methods used to solve Poisson’s equation. In this work, the normalized slope of the total field energy over all grid points is used:

[pic]. (Eq. ‎2-28)

The normalized slope of this equation is given as

[pic]. (Eq. ‎2-29)

If the normalized slope drops below 10⁻¹⁰, the iteration loop is stopped. This threshold is used because our experimentation shows that it maintains a reasonable tradeoff between processing time and error for a variety of solution spaces.
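A minimal sketch of this stopping rule (assuming, as a simplification of ours, that the total field energy is the sum of squared potential values over the grid):

```python
import numpy as np

def gauss_seidel_sweep(u):
    """One in-place Gauss-Seidel sweep (neighbor average plus 1)."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]) + 1.0
    return u

def solve(n, tol=1e-10, max_iters=100000):
    """Iterate until the normalized change in total field energy falls below tol."""
    u = np.zeros((n + 2, n + 2))   # n x n interior with a zero boundary ring
    energy = 0.0
    for k in range(max_iters):
        u = gauss_seidel_sweep(u)
        new_energy = float((u ** 2).sum())
        if new_energy > 0 and abs(new_energy - energy) / new_energy < tol:
            return u, k + 1
        energy = new_energy
    return u, max_iters

u, iters = solve(9)
print(iters)   # converges in well under the iteration cap for a 9 x 9 interior
```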

SHAPE CHARACTERIZATION

When the term “shape” is discussed, it is vital to understand exactly what is meant. By definition, shape is a measure of geometry or, more generally, of the spatial relationship of points, objects, nodes, etc.; the terms shape and geometry are thus nearly interchangeable. Therefore, the first and possibly most important task in this work is to clearly define the term “shape.” The principal property of shape is its independence from a given environment, as discussed in Section ‎3.1. Often, neighboring shapes can help formulate a decision about the nature of a particular shape in question, such as inferring a structure’s identity on a medical image from the presence of adjacent, identifiable anatomy. However, when discussing an object’s shape, one is describing the intrinsic geometry that defines the object, in an independent fashion.

In medicine, shape is frequently thought of as a spatial arrangement of structures or anatomy distinct from surrounding tissues or organs. The shape of interest could be a normal anatomical feature or an abnormal lesion such as a GBM. Without a metric for shape, quantifying such a structure can only be accomplished by simple measurements or by a highly trained observer. In the case of volume measurements, ideal segmentation allows a reliable measure. In the case of a clinician providing an observation, however, a high degree of inter-observer variability may occur, in addition to annotations that are very difficult to measure. Such observations are based purely on clinical training, experience, and sometimes just a loose expert opinion. Thus, it is important to provide reliable metrics that are stable and independent of inter- and intra-observer variation.

For this work, shape is defined only by the boundary of the object. It is assumed that the boundary of the object is finite, closed, and smooth for all points in R³. Such formalism is commonplace for PDE-based methods.

1 TRANSLATION, ROTATION, AND SCALE INDEPENDENCE

Consider the visual recognition of a car. Environmental cues such as a road or garage would help in the identification process. But even when a car shape is found in an unexpected environment, say inside a building, humans still readily recognize the geometry. Further, if the car is pointed north or south, or is flipped upside-down, it is still a car shape. This environmental independence can be abstracted another step into properties of transformation. Consider a small toy car in a child’s toy chest or a life-sized car on the street. Both would be classified as cars because the common appearance is easily recognized regardless of size. Similarly, if one saw a car silhouette on a road sign, or if someone sketched a basic outline of a car on a piece of paper, the shape would still be familiar. Thus, it can be said that a car’s shape is discernible regardless of its size or environment. Extracted into general properties of transformation, this means that shape is a measure of geometry that is scale, translation, and rotation independent. Shape metrics often require proper accounting for these changes, but the premise is that shape is independent of these factors. Such assumptions are common in medicine: when a physician examines a head MR or CT scan, an organ like the brain is easily recognized regardless of the relative size of the brain or its orientation in the image (see Figure ‎3.1, which illustrates a surface rendering of the brain).

[pic]

Figure ‎3.1: Shape is scale, translation, and rotation independent.

It is important to note that the boundary of a shape must be finite, as stated before, when using the methods presented in this work. At the present time, this method does not allow for boundaries that are probabilistic or “fuzzy” by nature. Further work could be done to allow for probabilistic boundaries, but this is outside of the current scope of this research. It is also assumed that ideal segmentation and/or registration has taken place prior to these methods being employed.

2 LEVEL-SET COMPLEXITY

The basis of the shape metric used in this work is examined in this section. Second-order finite differences, as presented in Section ‎2.2, outline the approximation scheme for Poisson’s equation. The iterative finite-difference method produces the solution, referred to as the potential, as seen for a two-dimensional square in Figure ‎3.2. One interesting property of these solutions is that they can be calculated at sub-pixel resolution, thus producing a smooth result even if the original image has coarse spatial resolution.

[pic]

Figure ‎3.2: The solution to Poisson’s equation for the interior of a square.

[pic]

Figure ‎3.3: The level sets of the solution to Poisson’s equation.

Close examination of Figure ‎3.3 reveals that the potential is a smoothing function of the interior of the square. That is, the level sets of the solution (outlined in black in Figure ‎3.3) become more circular by nature as the function approaches the global maximum or local maxima. This property can be expressed in an equation as

smoothness = f(potential). (Eq. ‎3-1)

Haidar et al. have proposed that this property creates a feature vector unique to any given shape3. Specifically, it is postulated that the rate of complexity (or smoothness) change of the potential is characteristic of the shape. For example, a square, such as the one seen in Figure ‎3.3, will have a completely different rate of complexity change compared to an oval.

3 DISPLACEMENT MAP

One method to quantify the complexity, or smoothness, of a given level set is to measure its variance on the boundary, as proposed by Haidar et al3. The first step in this process is to map the displacement of any given point within the solution. This map is similar to a Euclidean distance map, except that it accounts for the curvature of the surface of the potential; this is also known as the geodesic distance. Consider the path an airplane flies when traveling from New York City to London. The flight follows an arc caused by the round shape of the Earth (Figure ‎3.4). When computing this travel distance, it is necessary to include the extra distance traveled along the arc-shaped path. The displacement map computes such distances for a given solution to Poisson’s equation. In the flight example, the pathway the plane travels is called a streamline.

[pic]

Figure ‎3.4: The arc a plane flies due to the earth’s round shape is analogous to a streamline for computing geodesic distance.

Infinitely many streamlines together construct the surface of a displacement map. Another way to think of a streamline is to follow the path a drop of water would take under the force of gravity on the surface of the potential. A reverse summation of the distances along the paths of a large number of water drops gives the displacement map. Figure ‎3.5 shows an example of a displacement map for a square. Note that this map looks similar to a map of Euclidean distance; the notable difference is the pinching of the values toward the center, which is caused by the curvature of the Poisson solution as seen in Figure ‎3.2.

[pic]

Figure ‎3.5: Displacement map of a two-dimensional square where the maximum value is given by dark red and the minimum value is given by dark blue.

[pic]

Figure ‎3.6: Streamlines (black arrows) along the level sets of the potential for an arbitrary object.

The method employed for finding displacement maps was proposed by Yezzi et al. for computing the thickness of the cardiac wall7. Their work requires both upwinding and downwinding schemes, whose purpose is to find displacement along streamlines between two interfaces, namely the inner and outer walls of the heart. The work presented here requires a computation from only one interface, since the goal of this step is to solve for displacement on the interior of an object boundary. Haidar et al. suggest the use of the upwinding scheme from the mathematical approach of Yezzi et al. to accomplish this task3, 7.

To compute the displacement, the normalized gradient of the solution to Poisson’s equation must be computed first. The normalized gradient yields a field of vectors of unit length pointing toward local maxima or one global maximum (shown in Figure ‎3.7).

[pic]

Figure ‎3.7: Plot of the normalized gradient for a 5x5 2D square grid. Vectors point toward the maximum.

These vectors give us the directions that streamlines follow on the surface of the potential (in discrete form). The mathematical representation of this is

T = ∇φ / |∇φ|. (Eq. 3-2)

This is extracted into three separate mappings of the gradient with each map containing the contribution from only one dimension. This gives the variables Tx, Ty, and Tz. From this point on, the derivations are for three-dimensional data, since this approach is used in this work. Solving for the T-maps in three dimensions gives

Tx = (∂φ/∂x) / |∇φ|,  Ty = (∂φ/∂y) / |∇φ|,  Tz = (∂φ/∂z) / |∇φ|. (Eq. 3-3)

These equations yield two important parameters for computing displacement: field direction and magnitude.
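In two dimensions, the T-maps can be sketched with numpy's gradient routine (an illustration of ours; points with zero gradient, such as the global maximum, are left as zero vectors to avoid division by zero):

```python
import numpy as np

def t_maps(phi):
    """Normalized gradient components of a 2D potential."""
    gy, gx = np.gradient(phi)           # derivatives along rows (y) and columns (x)
    mag = np.hypot(gx, gy)
    safe = np.where(mag > 0, mag, 1.0)  # avoid dividing by zero at critical points
    return gx / safe, gy / safe

# Example: a linear ramp phi = x has a unit-length gradient in x everywhere.
phi = np.tile(np.arange(5.0), (5, 1))
tx, ty = t_maps(phi)
print(tx[2, 2], ty[2, 2])  # 1.0 0.0
```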

The displacement can now be constructed by an iterative computation that follows the streamline direction to calculate the geodesic distance. To start the process all points on the map are initialized to zero.

D_{i,j,l}^{(0)} = 0 (Eq. 3-4)

The algorithm then iterates through all points on the object’s interior using the equation

D_{i,j,l}^{(k+1)} = (1 + |Tx|Δx + |Ty|Δy + |Tz|Δz) / (|Tx| + |Ty| + |Tz|). (Eq. 3-5)

The direction that is used to compile displacement is dictated by the upwinding scheme

Δx = D_{i−1,j,l}^{(k+1)} if Tx > 0, otherwise D_{i+1,j,l}^{(k+1)} (and similarly for Δy and Δz). (Eq. 3-6)

It should also be noted that Δx, Δy, and Δz are set to zero if the corresponding T value is zero. Each iteration of D^{(k+1)} slowly computes the distance from the boundary toward the inside of the object, similar to the finite differencing schemes above. This process can utilize a simple Gauss-Seidel traversal to speed convergence, much like the iterative method for solving Poisson’s equation7. Values of Δx, Δy, and Δz are taken directly from the current iteration but, once again, the order of traversal is important.

Convergence is tested by a simple difference test

max_{i,j,l} |D_{i,j,l}^{(k+1)} − D_{i,j,l}^{(k)}| < ε (Eq. 3-7)

where ε is a threshold value. This equation halts the iterative process when the maximum change of any grid point in the entire object is less than 10⁻⁵, our value for ε. Using this value, a stable solution is usually computed within a few hundred iterations for both two- and three-dimensional objects.
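A rough two-dimensional sketch of the displacement iteration (our simplified reading of the upwind update, with unit grid spacing; zero-gradient points such as the global maximum are simply skipped):

```python
import numpy as np

def solve_potential(mask, sweeps=500):
    """Gauss-Seidel potential (neighbor average plus 1) inside the object mask."""
    u = np.zeros(mask.shape)
    for _ in range(sweeps):
        for i in range(1, mask.shape[0] - 1):
            for j in range(1, mask.shape[1] - 1):
                if mask[i, j]:
                    u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                      u[i, j-1] + u[i, j+1]) + 1.0
    return u

def displacement(mask, u, sweeps=200):
    """Upwind accumulation of distance from the boundary along streamlines."""
    gy, gx = np.gradient(u)
    mag = np.hypot(gx, gy)
    d = np.zeros(mask.shape)
    for _ in range(sweeps):
        for i in range(1, mask.shape[0] - 1):
            for j in range(1, mask.shape[1] - 1):
                if not mask[i, j] or mag[i, j] == 0:
                    continue                      # skip zero-gradient points
                tx = abs(gx[i, j]) / mag[i, j]
                ty = abs(gy[i, j]) / mag[i, j]
                dx = d[i, j-1] if gx[i, j] > 0 else d[i, j+1]  # upwind neighbor in x
                dy = d[i-1, j] if gy[i, j] > 0 else d[i+1, j]  # upwind neighbor in y
                d[i, j] = (1.0 + tx * dx + ty * dy) / (tx + ty)
    return d

mask = np.zeros((11, 11), dtype=bool)
mask[1:-1, 1:-1] = True                 # a square object with a zero boundary ring
u = solve_potential(mask)
d = displacement(mask, u)
print(np.round(d[5, 1:5], 2))           # grows roughly one unit per grid step
```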

4 SHAPE CHARACTERISTIC CURVE

The final step in the mathematical process is finding the shape characteristic of the object. In this step, the data from the displacement map and the solution to Poisson’s equation are combined to form a unique 2D characteristic curve that can represent a shape of any number of dimensions. As stated before, the interesting feature defined by Haidar et al. is the change in complexity of the level sets of the potential. Stated another way, the rate at which the object’s interior becomes smooth provides a feature vector that is thought to be unique for all shapes. To measure complexity, Haidar et al. suggest using the coefficient of variance3.

The coefficient of variance measures the complexity of a given level set using simple statistics. The strength of this approach is that the output is very easy to interpret. To remove any size dependencies, the potential map φ is normalized before characterization begins. A series of level sets of the potential is then created based on the precision of the sampling desired for the shape characteristic curves. If implemented correctly, the sample rate should not affect the individual sample values. Haidar et al. suggest that the curves can be analyzed with discretization as sparse as 0.1 units of the potential map, assuming the map is normalized with values between zero and one; however, their work only shows values between approximately 0.15 and 0.7 [3]. Therefore, the behavior of shape characteristic curves outside of this range is undocumented. For each level set, the boundary is identified, and the values from the displacement map at the same locations become the surface D(Si). These values are plugged into the equation for the coefficient of variance

ν(Si) = σ(D(Si)) / μ(D(Si)), (Eq. 3-8)

where σ and μ are the standard deviation and mean of the displacement values on the surface Si.

This measure produces a graph that is easily interpreted (see Figure 3.8). Theoretically, a single point is infinitely smooth. Thus, in general, the complexity value should decrease as the level-sets approach the global maximum (a single point); this property can be described as a monotonically decreasing curve. The curves all converge at an equipotential value of 1, at the global maximum grid point. Analytically, this convergence occurs because the standard deviation of a single value is zero.

[pic]

Figure ‎3.8: A basic shape characteristic curve of a square.

A similar phenomenon is observed for an equipotential of zero. This surface, Si, sits on the boundary of the object. Theoretically, the space directly adjacent to the boundary (the first interior grid points) has nearly the same displacement across the surface of the object. If the problem were continuous instead of discrete, the shape characteristic curve would drop to zero as the boundary is approached. Because the grid is discretized, the value at the boundary is slightly higher than zero. This discretization error can also affect the neighborhood around the boundary, as is the case with the discrete circle and oval (see Figure 3.9). We refer to these undocumented phenomena as the boundary effect.

Neither of these phenomena (convergence of all curves to zero at the global maximum, and non-zero values at the boundary) has been reported by Haidar et al. or others [3]. In their work, regions below 0.15 or above 0.7 are discarded, which maintains the monotonically decreasing function they describe. In reality, the boundary effect described above shows that this function does not decrease monotonically from zero (boundary) to one (global maximum). Further, multi-figural shapes are shown to demonstrate non-monotonic properties (Section 6.3). Thus, new properties of the shape characteristic curve have been found.

Figure ‎3.9 shows the shape characteristic plots for several common two-dimensional geometric objects. Further analysis of these objects is conducted in Section ‎6.2.

[pic]

Figure ‎3.9: Shape characteristic curves for four basic geometric shapes.

STATISTICAL FRAMEWORK

It is necessary to measure the significance of a family of shape characteristic curves in order to assess the validity of the metric and use it for shape comparison. Typically, related data cluster about some point in space in a statistically predictable manner. There are many types of data distributions, the most common of which is the normal, or Gaussian, distribution [67]. Without a predictable distribution such as a Gaussian, classic statistical methods fall short. Another compounding factor is the level of complexity of the feature, which, at this time, is still not fully understood. Comparative measures commonly used in oncology, and in many other disciplines of medicine, rely on simple parameters that are often reduced to a single measurement. In the case of tumor geometry, that measurement is frequently volumetric. Recently, more sophisticated measures such as the conformity index have become popular, but again, the basis of this metric is purely volumetric [64]. The method used in this work applies a higher level of sophistication to produce a complex feature vector. The shape characteristic curve is highly nonlinear by nature; thus, nothing can be assumed about its distribution. It is important to note that the curve must be analyzed over its entire length to access all of the shape information.

1 PERMUTATION TESTS

To address problems of similar nonlinearity and complexity in medical imaging, Holmes et al. introduced a method in 1996 that draws on previous work by many others, dating back to the 1930s, on nonparametric tests of randomization within populations. Such methods were sought to remove the difficulties associated with the numerous assumptions and restrictions of using data without known distributions [66]. Statistical permutation tests, as they are commonly known, are a class of hypothesis testing methods that rigorously inspect a given result to determine whether or not an observation is an artifact of pure chance. Permutation-based hypothesis testing assumes that, under the null hypothesis, data from each class are drawn from identical statistical distributions. However, nothing further is assumed about the distribution of the data in the model.

The purpose of hypothesis testing is to assess the validity of a null hypothesis. A null hypothesis is a statement that one wishes to reject with strong statistical significance. For example, a simple null hypothesis for comparing the weight of cars versus trucks would be: “There is no difference between the weight of cars and trucks.” If the p-value of the permutation test is statistically significant (< 0.05 is a typical threshold), the null hypothesis is rejected, but no other assumptions should be made about the data. An insignificant p-value indicates that the observed grouping could plausibly have arisen by chance; that is, random re-assignment of the data to classes produces an effect comparable to the baseline measure [68].

The minimum number of randomization tests (not to be confused with features or number of data) that can be used in a permutation test depends directly on the p-value needed for significance. A p-value less than or equal to 0.05 is widely accepted as statistically significant and thus constrains the amount of data required for running a standard permutation test. Stated another way, the p-value is the maximum acceptable probability of a false positive result. If there are n possible groupings of data, the minimum p-value that can be achieved is 1/n. Setting this equal to the desired p-value gives 0.05 = 1/n, so n = 20. Consequently, the minimum number of tests required for statistical significance is 20.

2 RANDOM PERMUTATION TESTS

Permutation tests work by testing all of the possible combinations of grouping data into classes. However, it is a common occurrence that too many combinations exist for a feasible test of the data. In this case, a Monte-Carlo permutation test (also known as a random permutation test) is used. In a Monte-Carlo permutation test, a subset of the possible combinations is tested. It is assumed that the distribution can be accurately approximated by an acceptable number of permutations. Edgington et al. report that the number of permutations can be as low as 1,000 for acceptable results [69]. However, at least 10,000 permutations are standard for a more robust approximation [68]. Due to the large number of samples required to establish clinical significance with our data, the Monte-Carlo permutation test is used in this work. Each test uses at least 10,000 permutations, and most in this work use at least 100,000.

The generalized Monte-Carlo permutation test for a two-class set of data with classes A and B consists of the following steps:

1. Gather the test data

2. Compute the test statistic for each class

3. Find a defining statistical measure to determine the experimental effect (often the mean is used) and compute as a baseline metric

4. Re-group data randomly into groups maintaining original group sizes

5. If the defining statistical measure is better than the baseline measure in step 3, mark as a failure

6. Repeat numerous times (at least 104)

7. Compute the p-value as a ratio of failures to permutations (p = failures/permutations)

The significance of the test is based on the distribution of permutation results. If the baseline measure is better than 95% of the results, the grouping is considered statistically significant. Thus, the false positive rate is directly extracted from the permutation test in the form of the p-value.
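The steps above can be sketched in a few lines. The toolset itself runs in MATLAB; this NumPy sketch is only an illustration, using the absolute difference of group means as the defining statistical measure (step 3) and a fixed seed for reproducibility. The function name permutation_test is ours.

```python
import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=None):
    """Monte-Carlo permutation test (steps 1-7) on the difference of
    group means. Returns p = failures / permutations."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.concatenate([a, b])
    baseline = abs(a.mean() - b.mean())      # step 3: baseline metric
    failures = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)       # step 4: random regrouping
        d = abs(perm[:a.size].mean() - perm[a.size:].mean())
        if d >= baseline:                    # step 5: at least as extreme
            failures += 1
    return failures / n_perm                 # step 7: p-value

# two well-separated groups: the baseline split should beat nearly every
# random regrouping, giving a small p-value
p = permutation_test([1, 2, 3, 4], [10, 11, 12, 13], n_perm=2000, seed=0)
```

Note that the original split itself (and its mirror) always counts as a failure under the "at least as extreme" rule, so the estimated p-value is never exactly zero in the limit of many permutations.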

3 SHAPE CHARACTERISTIC MONTE-CARLO PERMUTATION TEST

The Monte-Carlo permutation test used specifically in this study is now reviewed. For ease of discussion, population groupings will be arbitrarily referred to as groups A and B in this section. First, the generalized null hypothesis for this study is as follows: “There is no difference in the mean shape characteristic curves between the objects in group A compared to the objects in group B.” Second, the test statistic used is the L2 norm of the difference of the mean shape characteristic curves, as suggested by Haidar et al. [3]. This statistic treats the difference of the two mean curves as a measure of distance. Curves with significantly greater differences (and/or separation) in their values will have a larger resulting L2 norm (desirable for the baseline measure, D0). Conversely, groups whose curves are statistically indistinguishable will have nearly the same mean curve and, subsequently, a small difference.

In all of our shape-based permutation tests, the following steps are performed:

1. Discretize the shape characteristic curves at a constant interval

2. Group the curves into two groups, A and B, based on the hypothesis being tested

3. Compute the mean of the shape characteristic curves for each group

4. Compute the L2 norm of the difference of the group mean curves

5. Record as true (baseline) difference, D0

6. Randomly assign shapes to groups A or B

7. Repeat discretization, means, and difference computation (steps 1-4)

8. If the difference (step 4) is greater than D0, meaning there is more separation of the curves under the random grouping, mark as a failure

9. Repeat a minimum of 10,000 times

10. Compute the p-value as a ratio of failures to permutations (p = failures/permutations)
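Specialized to curves, the same procedure uses the L2 norm of the difference between group-mean curves as the test statistic. Again this is a hedged NumPy sketch, not the MATLAB toolset; the function name curve_permutation_test and the synthetic flat curves in the demo are ours.

```python
import numpy as np

def curve_permutation_test(curves_a, curves_b, n_perm=10_000, seed=None):
    """Monte-Carlo permutation test for two families of discretized shape
    characteristic curves (steps 1-10). Statistic: L2 norm of the
    difference of the group-mean curves. Returns (D0, p)."""
    rng = np.random.default_rng(seed)
    A = np.asarray(curves_a, float)          # shape (nA, n_points)
    B = np.asarray(curves_b, float)          # shape (nB, n_points)
    pooled = np.vstack([A, B])
    nA = A.shape[0]
    stat = lambda x, y: float(np.linalg.norm(x.mean(axis=0) - y.mean(axis=0)))
    d0 = stat(A, B)                          # step 5: baseline difference D0
    failures = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])   # step 6: random grouping
        if stat(pooled[idx[:nA]], pooled[idx[nA:]]) >= d0:
            failures += 1                        # step 8: more separation
    return d0, failures / n_perm                 # step 10: p-value

# synthetic families of flat curves with clearly different means
rng0 = np.random.default_rng(1)
A = 0.1 + 0.01 * rng0.standard_normal((6, 11))
B = 0.3 + 0.01 * rng0.standard_normal((6, 11))
d0, p = curve_permutation_test(A, B, n_perm=2000, seed=1)
```

Because the curves are already discretized at a common interval (step 1), the mean and the L2 norm are taken pointwise across the sample positions.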

4 CLINICAL EXAMPLE: NORMAL NEUROANATOMY

An initial study of the shape characteristic curve and permutation test was conducted on simple structures of the brain with clear shape similarities and differences. Data were collected from the Internet Brain Segmentation Repository (IBSR) on 18 subjects with MRI scans of the brain. More information on the IBSR is given in Section 6.5.

Two structures with clearly different shapes, the hippocampus and the brainstem, were selected to verify the rejection of the null hypothesis. For this test our null hypothesis is “There is no difference in the mean shape characteristic curves of the right hippocampus and the brainstem of 18 normal subjects.” Segmentations were provided with the MRI data from the IBSR. The segmentations of the right hippocampus and brainstem were each converted into 3D binary datasets; the voxels were resliced and upsampled to achieve isotropic dimensions. This study of clearly different shapes resulted in the successful rejection of the null hypothesis. The shape characteristic curves for these groups can be seen in Figure 4.1, where red curves are associated with the right hippocampus and blue curves with the brainstem of each subject. The baseline means of the two groups are shown in Figure 4.2.

[pic]

Figure ‎4.1: The shape characteristic curves of the right hippocampus (red) and brainstem (blue) for 18 normal subjects.

[pic]

Figure ‎4.2: The baseline mean of the right hippocampus (red) and brainstem (blue) for 18 normal subjects.

Monte-Carlo permutation testing was performed with 10,000 random groupings of the data. The results of the difference (L2 norm) metric are plotted in the histogram in Figure ‎4.3. The baseline difference, D0, is 0.127. The computed p-value of 0.0002 allows the rejection of the null hypothesis. Thus, the brainstem and hippocampus shape characteristic curves are different, as are the brainstem and hippocampus shapes.

[pic]

Figure ‎4.3: The histogram of the permutation test of the difference between the right hippocampus and the brainstem.

The next study examined two shapes with extremely similar geometries: the left and right hippocampus. For this test our null hypothesis is “There is no difference in the mean shape characteristic curves of the right and left hippocampus of 18 normal subjects.” The data were acquired in the same manner as the previous study in this section. Figure 4.4 - Figure 4.6 show the results from this study. The baseline D0 for this study is 0.0324. The p-value of 0.102 does not reach statistical significance; thus, the null hypothesis cannot be rejected, as expected for such similar structures.

[pic]

Figure ‎4.4: The shape characteristic curves of the right (red) and left (blue) hippocampus for 18 normal subjects.

[pic]

Figure ‎4.5: The baseline mean shape characteristic curves of the right hippocampus (red) and left hippocampus (blue) for 18 normal subjects.

[pic]

Figure ‎4.6: The histogram of the permutation test of the difference between the right and left hippocampus for 18 normal subjects. D0 = 0.0324.

An error-checking study was conducted to check the software for problems. The right hippocampus was tested against itself to ensure that there is no statistically significant result for such a situation. For this test our null hypothesis is “There is no difference in the mean shape characteristic curves of the right hippocampus of 18 normal subjects as compared to itself.” The data were acquired in the same manner as the previous studies in this section. Figure 4.7 and Figure 4.8 show the results from this study. The baseline D0 for this study is zero since there is no difference in the L2 norm. A p-value equal to 1 provides no statistical significance; thus, the null hypothesis cannot be rejected, as expected when comparing identical structures.

[pic]

Figure ‎4.7: The shape characteristic curves of the right hippocampus for 18 normal subjects.

[pic]

Figure 4.8: The histogram of the permutation test of the difference between the right hippocampus and itself for 18 normal subjects. D0 = 0; thus, the entire distribution is greater than or equal to the baseline, and the null hypothesis cannot be rejected, as expected.

Finally, two structures with a slightly larger variance in shape than the previous anatomical structures, but clearly distinct geometries from each other, were selected: the 3rd and 4th ventricles. For this test our null hypothesis is “There is no difference in the mean shape characteristic curves of the 3rd and 4th ventricles of 18 normal subjects.” The data were acquired in the same manner as the previous studies in this section. Figure 4.9 - Figure 4.11 show the results from this study. The baseline D0 is 0.104. The resulting p-value is 0.009, allowing the rejection of the null hypothesis as expected: the 3rd and 4th ventricles have statistically different shapes.

[pic]

Figure ‎4.9: The shape characteristic curves of the 3rd ventricle (red) and 4th ventricle (blue) for 18 normal subjects.

[pic]

Figure ‎4.10: The baseline mean shape characteristic curves of the 3rd ventricle (red) and 4th ventricle (blue) for 18 normal subjects.

[pic]

Figure ‎4.11: The histogram of the permutation test of the difference between the 3rd and 4th ventricles for 18 normal subjects. D0 = 0.104.

A summary of the results for this set of studies can be found in Table 1.

|Brain Structure 1 |Brain Structure 2 |p-value |

|R. hippocampus |Brainstem |0.0002 |

|R. hippocampus |L. hippocampus |0.102 |

|R. hippocampus |R. hippocampus |1.000 |

|3rd ventricle |4th ventricle |0.009 |

Table 1: A summary of the Monte-Carlo permutation test brain studies.

SOFTWARE TOOLSET

A robust software toolset was created to apply Poisson-based shape metrics to anatomic images and establish the statistical significance of the resulting data. The toolset is capable of working with a wide array of data ranging from simple silhouette images to full 3-D volume scans acquired from clinical imaging devices. Documentation of the software can be found in Appendix A. Code for the important functions and scripts used in this dissertation is found in Appendix B.

1 IMPLEMENTATION

The software development process began with identifying the most efficient and accurate way to compute the solution to Poisson’s equation. Both finite element and finite difference methods were explored. After evaluating both, a finite differencing method was chosen for its much greater efficiency. The mathematical basis for this method is presented in Section 2.2. Finite differences allow any size and number of irregular shapes within one image to be solved simultaneously, without the complexity encountered when using finite elements. Multiple structures in one image are analyzed together in the computation of the shape characteristic curve. This is intuitively appropriate because tumors often break apart, and the method must account for this. A more thorough review can be found in Section 6.3.

Poisson’s equation is not quickly solved for large grids by a computer system. The reason is that the algorithm relies on iterative methods that must perform a large number of floating point operations per iteration. For a dataset such as a 512 x 512 voxel MRI scan with 400 slices, the data structure that stores the evolving solution to Poisson’s equation requires approximately 840 MB in double precision. This stores only the potential and does not include the other pieces of information such as the gradient component T-maps (x3), binary map, and displacement map, which put the total memory required for computation at over 4 GB. A typical desktop computer does not have enough memory to accomplish this without large page files that cache the contents of system memory when an overflow occurs. To deal with these memory issues, single precision (32-bit) is used. Additionally, the original binary map is trimmed to the smallest possible dimensions around the area of interest, plus a two-pixel pad around the object to ensure Dirichlet boundary conditions. The iterative methods in this work all collapse the grid coordinates into a condensed list of points within the boundary of the object, rather than creating many large blocks of memory to store unnecessary intermediate grids for iterating through all x, y, and z locations. This idea is similar to, yet slightly different from, using sparse matrices.
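The trimming step can be sketched as follows. This is a NumPy illustration (the toolset itself is MATLAB); the function name trim_with_pad is our own.

```python
import numpy as np

def trim_with_pad(mask, pad=2):
    """Crop a binary map to the tight bounding box of the object, then
    re-pad with `pad` background pixels on every side so the Dirichlet
    (phi = 0) boundary condition is preserved while memory shrinks."""
    mask = np.asarray(mask).astype(bool)
    coords = np.argwhere(mask)                  # all object voxel indices
    lo = coords.min(axis=0)                     # bounding-box corners
    hi = coords.max(axis=0) + 1
    cropped = mask[tuple(slice(l, h) for l, h in zip(lo, hi))]
    return np.pad(cropped, pad)                 # pads with background (0)

# a 40 x 30 object inside a 512 x 512 image shrinks to 44 x 34
m = np.zeros((512, 512), bool)
m[200:240, 300:330] = True
t = trim_with_pad(m)
```

The same code works unchanged for 3D masks, since argwhere and the slice tuple follow the array's dimensionality.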

After full testing in two dimensions, a new program was created to work with three-dimensional datasets. Solving in higher dimensions is theoretically similar to two dimensions. In fact, the solutions to Poisson’s equation, the displacement, and the shape characteristic curve can be computed in any number of dimensions. Because of this, the shape characteristic could also be useful in analyzing higher dimensional datasets such as 4D lung scans, or even combinations of 3D scans from a variety of modalities such as PET or CBCT. Working with higher dimensional data has not yet been studied with this method.

It should be noted that extending the toolset to three dimensions increased processing time by approximately four hours (~3,000 Poisson iterations, 200 displacement iterations) for a simple cube (200 x 200 x 200 pixels), due to the expanded space that the iterative methods must traverse. However, the anatomic structures of the brain are usually much smaller: a structure such as the hippocampus requires less than 20 seconds of processing time. One possible alternative is to solve Poisson’s equation for a series of 2D slices in each dimension to increase computational efficiency [4]. However, in-house testing showed little time savings with this method; in nearly all test cases, the series approach took longer to complete and produced a less accurate result.

2 FRAMEWORK AND NOTABLE ALGORITHMS

The framework for finding shape characteristic curves and establishing statistical significance via Monte-Carlo permutation testing within the software toolset is now reviewed. All code runs within the MATLAB environment and was developed on version R2006a. An important naming convention should be noted within the code directory. All scripts that begin with “sintay_” are modular functions that represent the core of the toolset. Experimental “inline” code used to generate phantoms, run the studies for this research, and analyze prospective ideas is preceded by “experiment_”. These pieces of code are not functions, and many begin with statements such as “clear all” and “close all”; thus, care should be taken when running “experiment” scripts. When possible, built-in functions were used to speed computation and increase long-term compatibility.

The process for running a basic study has three steps:

1. Import data into MATLAB

2. Convert to a binary map where 0’s are background and 1’s represent the object for analysis

3. Run the Poisson Shape Characteristic (PSC) program of choice (2D or 3D)

When running a shape comparison study this is further expanded to include these additional steps:

4. Repeat for all subjects (or objects) under study

5. Group objects according to the null hypothesis into two arrays

6. Run the permutation test function on the arrays, specifying the number of permutations (at least 1,000 for an initial test and 10,000 for a thorough analysis)

7. Plot and compare
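Steps 4-6 above compose into a short driver. This Python sketch is purely illustrative (the toolset itself is MATLAB); compute_curve and curve_permutation_test are hypothetical placeholder callables standing in for the toolset's PSC program and permutation test function.

```python
import numpy as np

def run_study(masks_a, masks_b, compute_curve, curve_permutation_test,
              n_perm=10_000):
    """Hypothetical driver for a two-group shape comparison study."""
    # step 4: one discretized shape characteristic curve per subject
    # step 5: grouped into two arrays according to the null hypothesis
    A = np.array([compute_curve(m) for m in masks_a])
    B = np.array([compute_curve(m) for m in masks_b])
    # step 6: permutation test on the two arrays of curves
    return curve_permutation_test(A, B, n_perm)

# minimal usage with dummy stand-ins for the two callables
d0, p = run_study(
    [1.0, 2.0], [5.0, 6.0],
    compute_curve=lambda m: np.full(4, float(m)),
    curve_permutation_test=lambda A, B, n: (
        float(np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))), 0.5))
```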

Within the Poisson Shape Characteristic program of choice, the general algorithm for finding shape characteristic curves is as follows:

1. Trim unnecessary background regions from the input image to reduce the size of the dataset and enhance processing time

2. Compute the solution to Poisson’s equation using finite differences

3. Compute the gradient of the solution in step 2

4. Compute the T-maps of the gradient

5. Compute the displacement map

6. Compute the shape characteristic curves

It is also vital that a stable convention be followed when computing the shape characteristic curve in order to produce a consistent result. Level-set boundaries can be defined in many ways, and even a slight deviation in defining the boundary points can lead to a metric that does not observe the same complexity property of the solution to Poisson’s equation. In fact, this was one of the most difficult parts of the programming process to perfect. Failure to follow this algorithm precisely may produce inaccurate shape characteristic curves. The final algorithm is as follows:

1. The number of discrete points desired for the shape characteristic curve is specified

2. The solution to Poisson’s equation is normalized between zero (0) and one (1)

3. Each point on the shape characteristic curve is solved for iteratively:

a. A level map containing all pixels from the normalized potential greater than or equal to the current level is created; for the zero level set, zeros are not included (to avoid using the background)

b. A “26-connected” perimeter (3D) is computed to find the surface, Si

c. The surface is used as a mask for gathering the correct pixel values from the displacement map

d. ν (nu) is computed for this set of pixels (see Section 3.4); if there are no pixels at the level set (which occurs only rarely, at the level set equal to exactly one (1)), the complexity is assumed to be zero (0)
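The level-set algorithm above can be sketched in 2D, with an 8-connected perimeter standing in for the "26-connected" 3D case. This is an illustrative NumPy fragment, not the MATLAB toolset; the disc potential and displacement maps in the demo are analytic stand-ins for the solver output, and all function names are ours.

```python
import numpy as np

def cv(vals):
    """Coefficient of variance of the displacement values on a surface;
    an empty level set is assigned zero complexity (step 3d)."""
    return 0.0 if vals.size == 0 else float(np.std(vals) / np.mean(vals))

def boundary(level):
    """Perimeter of a binary set: pixels with any of their 8 neighbours
    outside the set (2D analogue of the 26-connected 3D perimeter)."""
    h, w = level.shape
    padded = np.pad(level, 1)
    core = level.copy()
    for dx in (-1, 0, 1):          # erode: keep pixels whose whole
        for dy in (-1, 0, 1):      # neighbourhood is inside the set
            core &= padded[1 + dx:1 + dx + h, 1 + dy:1 + dy + w]
    return level & ~core

def shape_characteristic(phi, disp, n_levels):
    """Normalize the potential (step 2), then compute the coefficient of
    variance of the displacement on each level-set perimeter (step 3)."""
    phi_n = phi / phi.max()                   # step 2: normalize to [0, 1]
    levels = np.linspace(0.0, 1.0, n_levels)  # step 1: discretization
    curve = [cv(disp[boundary((phi_n > 0) if e == 0 else (phi_n >= e))])
             for e in levels]                 # steps 3a-3d
    return levels, np.asarray(curve)

# demo on a disc: analytic stand-ins for the potential and displacement
y, x = np.mgrid[0:41, 0:41]
r2 = (x - 20) ** 2 + (y - 20) ** 2
R = 18
mask = r2 < R * R
phi = np.where(mask, R * R - r2, 0.0)        # solves laplacian(phi) = -4
disp = np.where(mask, R - np.sqrt(r2), 0.0)  # distance to the disc boundary
levels, curve = shape_characteristic(phi, disp, n_levels=11)
```

For the disc the curve should stay near zero, with small non-zero wiggle from the discrete boundary, consistent with the boundary effect described earlier; at the level exactly equal to one, only the global-maximum pixel survives and the complexity is zero.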

VALIDATION STUDIES

Extensive validation studies were conducted in 2D and 3D to ensure the integrity of the implementation of the described methods in this work within the framework of the toolset. The studies also provide a basis for verifying the hypothesis of this work, that tumor shape (and any other shape for that matter) can be adequately described and characterized in at least three dimensions by the shape characteristic curves.

1 ANALYTICAL SOLUTION COMPARISON

MATLAB provides solutions for Poisson’s equation in 2D for a circle (via the PDE toolbox) and a rectangle (via the matrix gallery). These test shapes were used to ensure that the iterative method for finding the potential, described in Section ‎2.3.2 and implemented in the toolset, is reliable. The first test case is a 2D circle with a radius of 100 (at 1 pixel per unit). The potential map created by the analytical solution is shown in Figure ‎6.1 and for the iterative finite differences method in Figure ‎6.2. The iterative method took 7,215 iterations to complete. The error was computed by subtracting the solution to the iterative method from the solution to the direct method. The mean error between the direct solution and the iterative solution is 0.0067. Figure ‎6.3 shows a map of the error. Note the error at the boundary due to the discretization of the circle.

[pic]

Figure ‎6.1: Image of the potential of a circle found by using the analytical, direct method.

[pic]

Figure ‎6.2: Image of the potential of a circle found iteratively using a Gauss-Seidel traversal.

[pic]

Figure ‎6.3: Error map of discrepancies between the direct and iterative methods for computing the solution to Poisson’s equation.

The second test case analyzed is a 2D square with side length of 100 (at 1 pixel per unit). The potential map created by the direct method is shown in Figure 6.4, and for the iterative finite differences method in Figure 6.5. The iterative method took 8,355 iterations to complete. The mean error between the two is 0.000264. Figure 6.6 shows a map of the error. Note that there is little boundary error in this case because the boundary is consistent and precise due to the flat edges.

[pic]

Figure ‎6.4: Image of the potential of a square found by using the analytical, direct method.

[pic]

Figure ‎6.5: Image of the potential of a square found iteratively using a Gauss-Seidel traversal.

[pic]

Figure ‎6.6: Error map of discrepancies between the direct and iterative methods for computing the solution to Poisson’s equation.

The error for both the circle and the square was found to be acceptable for our purposes. Thus, the reliability of our toolset for generating the solution to Poisson’s equation is established.

2 TWO-DIMENSIONAL SYNTHETIC STUDIES

1 COMPARATIVE STUDY

By definition, a circle is “infinitely smooth” at the surface. In terms of this method, a shape that is infinitely smooth has a complexity of zero. Only two shapes theoretically exhibit this behavior: the circle and a single point. A single point occurs in a discrete system only at the global maximum of the solution to Poisson’s equation. At such a point, the complexity metric becomes zero mathematically because the standard deviation of a single value is zero. The mathematical proof is shown here.

ν = σ / μ (Eq. 6-1)

σ = sqrt( (1/N) Σi (xi − μ)² ) (Eq. 6-2)

N = 1 for a single point, so μ = x1 (Eq. 6-3)

σ = sqrt( (x1 − x1)² ) = 0 (Eq. 6-4)

ν = 0 / x1 = 0 (Eq. 6-5)

From this, it is expected that all shapes have zero complexity at the global maximum. This makes intuitive sense because Poisson’s equation smooths the interior of an object, which is equivalent to making the level sets more circular near maxima.

Similarly, an ideal continuous circle has a complexity of zero. The potential level sets of a circle all have equal geodesic distance from the boundary; thus, their standard deviation will be zero (and, as such, their coefficient of variance). This holds for every potential level-set of the circle, producing a flat shape characteristic curve at zero. A non-zero flat shape characteristic curve can be expected from an oval, because it is essentially a circle with some elongation. The elongation causes a rise in complexity, but its interpretation is straightforward.

Often with computer implementation, the theoretical expectation does not match the end result. This is due to discretization errors introduced when using a finite grid. In the case of the circle, some error is expected, especially as the iterative algorithm operates near the discretized boundary. The computer algorithm expects the boundary to occur in the exact center of a boundary pixel. Since this is not the case analytically, distance errors are introduced causing a non-uniform displacement near the surface of the object, especially when using sparsely sampled data. In terms of the shape characteristic curve, it is expected that the curve will have more error near E = 0 (the left side of the shape characteristic curve).

Having established the reliability of the numerical implementation, a number of shapes were investigated to determine shape characteristic properties. First, basic two-dimensional geometric shapes were used to verify that Poisson’s equation and the shape characteristic curves were implemented correctly according to Haidar et al. [3]. The phantoms are discretized from ideal vector representations of the shapes to ensure their accuracy (Figure 6.7).

[pic] [pic] [pic] [pic]

Figure ‎6.7: Binary silhouettes of synthetic shapes from left to right; a circle, oval, square, and triangle.

Major discrepancies were discovered between our results and the original work published by Haidar et al., which is the only set of reference data available at this time. The silhouettes in Figure 6.7 were carefully examined in order to develop an intuitive expectation for the types of curves they should produce. The image solutions to Poisson’s equation and the displacement maps for the basic silhouettes seem to match the published data. However, the published shape characteristic curves did not match our results: the published curve for the oval did not match the hypothesized behavior (a non-zero flat line), while the published circle produced a graph that was perfectly zero (numerically very hard to produce).

The two sets of data were different to the extent that the authors were contacted to confirm the published results. Contact with the corresponding author revealed that an incorrect graph may have been published; it is our belief that this graph is based on previous research by Haidar et al. using a completely different streamline computation [4, 8, 9].

Our toolset implementation of the algorithm matched the hypothesis for the basic geometric objects. Figure 6.8 - Figure 6.11 show the potential solved for the interior of the basic silhouettes, along with the corresponding displacement maps. Note the small numerical errors in the displacement maps of the oval and triangle, due to the finite differencing process operating on perfect geometry. Similar inaccuracies are also found in Haidar et al. [3].

Figure 6.12 shows the shape characteristic curves computed for the shapes in Figure 6.8 - Figure 6.11. Note the near-zero curve for the circle, with increasing error near the boundary (observed at E = 0). The oval also has a flat curve after an initial sharp increase in complexity, and it experiences numerical error near the boundary much like the circle. The square and triangle have higher complexities, as expected. This graph does not match the published data of Haidar et al., but we believe that these results represent the correct curves for the approach presented here [3].

[pic] [pic]

Figure ‎6.8: The solution to Poisson’s equation, or the potential for a square (left) and the corresponding displacement map (right).

[pic] [pic]

Figure ‎6.9: The solution to Poisson’s equation, or the potential for a circle (left) and the corresponding displacement map (right).

[pic] [pic]

Figure ‎6.10: The solution to Poisson’s equation, or the potential for an oval (left) and the corresponding displacement map (right). Arrows identify areas of minor numerical error.

[pic] [pic]

Figure ‎6.11: The solution to Poisson’s equation, or the potential for a triangle (left) and the corresponding displacement map (right). Arrows identify areas of minor numerical error.

[pic]

Figure ‎6.12: The shape characteristic curves for a circle (dashed line), oval (dash-dotted line), square (solid line), and triangle (dotted line).

2 VARIOUS STRUCTURE STUDY

A circle, elbow, square, ellipse, triangle, star, circle pair, doughnut, and connected circle pair were examined to understand the output for these various structures (Figure 6.13). The solution to Poisson’s equation for each shape is shown in Figure 6.14. The corresponding displacement maps are shown in Figure 6.15. Finally, the shape characteristic curves are shown in Figure 6.16. These plots deepen the theoretical understanding of the shape characteristic curve: the shape with the greatest boundary complexity, the star, produces the highest curve, as expected; the circle pair produces a curve similar to that of the single circle, with a slight increase in complexity; the connected circle pair produces a non-monotonic curve; and the doughnut has a characteristic similar to the circle and oval.

[pic] [pic] [pic]

[pic] [pic] [pic]

[pic] [pic] [pic]

Figure ‎6.13: Binary representation of the test shapes.

[pic] [pic] [pic]

[pic] [pic] [pic]

[pic] [pic] [pic]

Figure ‎6.14: The solution to Poisson’s equation for nine different shapes.

[pic] [pic] [pic]

[pic] [pic] [pic]

[pic] [pic] [pic]

Figure ‎6.15: The displacement maps of nine different test shapes.

[pic]

Figure ‎6.16: The shape characteristic curves of nine different 2D synthetic shapes.

3 TRANSLATION, ROTATION, AND SCALE INDEPENDENCE

Translation, rotation, and scale independence were specifically examined for the next study. Six 2D squares and six 2D triangles of varying size, orientation, and location were used (Figure ‎6.17). Figure ‎6.18 and Figure ‎6.19 show the solution to Poisson’s equation for each of the shapes. Figure ‎6.20 and Figure ‎6.21 show the corresponding displacement maps. The shape characteristic curves for all shapes are plotted in Figure ‎6.22 along with the mean curves for each class in Figure ‎6.23. Note the similarity within classes and the substantially different means. A permutation test of the statistical significance of the results reveals a p-value < 0.00001, rejecting the null hypothesis that “There is no difference between the shape of squares and triangles of various orientations and sizes.”
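The Monte-Carlo permutation test used above can be sketched as follows. This is an illustrative Python version with synthetic curves; the test statistic D0 is taken here as the maximum absolute difference between the group mean curves, which is an assumption for illustration only (the dissertation's own difference measure and implementation are defined in Chapter 4):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(curves_a, curves_b, n_perm=10_000):
    """Monte-Carlo permutation test on two groups of curves.

    Statistic (assumed for this sketch): maximum absolute difference
    between the two group mean curves.  The p-value is the fraction of
    random relabelings whose statistic reaches the observed D0."""
    a, b = np.asarray(curves_a), np.asarray(curves_b)
    pooled = np.vstack([a, b])
    n_a = len(a)
    def stat(x, y):
        return np.max(np.abs(x.mean(axis=0) - y.mean(axis=0)))
    d0 = stat(a, b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= d0:
            count += 1
    return d0, count / n_perm

# Synthetic curves: "squares" sit well above "triangles" at every E value.
e = np.linspace(0, 1, 21)
squares = [0.5 + 0.1 * e + 0.01 * rng.standard_normal(21) for _ in range(6)]
triangles = [0.2 + 0.1 * e + 0.01 * rng.standard_normal(21) for _ in range(6)]
d0, p = permutation_test(squares, triangles, n_perm=2000)
```

Because the two synthetic classes are clearly separated, essentially only the true labeling (and its mirror) reproduces D0, so the estimated p-value is very small, mirroring the p < 0.00001 result reported above.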

[pic] [pic] [pic] [pic] [pic] [pic]

[pic] [pic] [pic] [pic] [pic] [pic]

Figure ‎6.17: Various squares (top row) and triangles (bottom row).

[pic]

[pic]

Figure ‎6.18: The solution to Poisson’s equation for various squares.

[pic]

[pic]

Figure ‎6.19: The solution to Poisson’s equation for various triangles.

[pic]

Figure ‎6.20: The displacement map for varying squares.

[pic]

Figure ‎6.21: The displacement map for varying triangles.

[pic]

Figure ‎6.22: The shape characteristic curves of the squares (red) and triangles (blue) used in the translation, rotation, and scale independence study.

[pic]

Figure ‎6.23: The mean shape characteristic curves of each class.

[pic]

Figure ‎6.24: The histogram of the Monte-Carlo permutation test, D0 = 0.2937.

These results support the conclusion that shape characteristic curves are insensitive to translation, rotation, and scale. Thus, the measure is characteristic of the geometry of a given shape, as expected. This is an important property that distinguishes shape from other measures such as volume.

4 TWO-CLASS 2D SILHOUETTE STUDY

The silhouettes of 18 cats and 18 dogs are shown in Figure 6.25 and Figure 6.26, respectively. The shape characteristic curve was computed to determine whether the proper level of statistical significance can be achieved when comparing the curves of each type of animal. Figure 6.27 and Figure 6.28 show the potential for each animal, and Figure 6.29 and Figure 6.30 show the corresponding displacement maps. Figure 6.31 reveals curves that are very similar, as expected, because the shapes of cats and dogs are very similar in a 2D silhouette. However, a small difference in the higher potential region (E close to 1) points to a slightly more complex shape for cats compared to dogs. Some hints of this difference can be seen when examining the red portions of the potentials in Figure 6.27 and Figure 6.28. A permutation test of 100,000 permutations reveals a p-value < 0.00001. Thus, the null hypothesis, “There is no difference in the shape of cats and dogs in a 2D silhouette,” is rejected.

[pic]

Figure ‎6.25: 18 cat silhouettes.

[pic]

Figure ‎6.26: 18 dog silhouettes.

[pic]

Figure ‎6.27: The potential for 18 cats.

[pic]

Figure ‎6.28: The potential for 18 dogs.

[pic]

Figure ‎6.29: The displacement map for 18 cats.

[pic]

Figure ‎6.30: The displacement map for 18 dogs.

[pic]

Figure ‎6.31: The shape characteristic curves for 18 cats and 18 dogs.

[pic]

Figure ‎6.32: The mean shape characteristic curves.

[pic]

Figure ‎6.33: The permutation test histogram for the cat and dog study, D0 = 0.0908.

3 MULTI-FIGURE SYNTHETIC SHAPE STUDIES

The shape characteristic results for a single 2D shape have been thoroughly established in Sections 6.1 - 6.2. However, volumes of interest for this work often occur in multiple foci. It is, therefore, important to understand the implications of this added complexity. As stated before, it is theorized that all shapes have a unique shape characteristic curve3. While this is very difficult to prove, our results provide some evidence to support the theory, and we suggest that it extends to multi-figure analysis. Figure 6.36 shows the shape characteristic curves for the variety of multi-figure shapes shown in Figure 6.34 and Figure 6.35. These curves reveal somewhat intuitive results based on the level-set complexity approach to this problem. For instance, curve 11 in Figure 6.36 has a relatively low complexity (E > 0.25), then a dramatic rise between E = 0.25 and E = 0.2. This is due to the dominance of the large circle until the level set containing the spikes of the star is finally reached. Multi-figure images seem to track well together based on the predominant shape in a given set, as shown by curves 1, 2, 3, and 4. The largest shape in these figures is the star; thus, a high complexity and tight congruence are expected. Minor deviations near the end of the curve mark the presence of the second figure and its roundness. Similarly, curves 5, 6, and 7 track together, as do 8 and 9, showing grouping according to the number of figures in the image. Curve magnitude is consistent with the complexity of the shapes within the image.

|[pic] |[pic] |

|Multi-test 1 |Multi-test 2 |

|[pic] |[pic] |

|Multi-test 3 |Multi-test 4 |

|[pic] |[pic] |

|Multi-test 5 |Multi-test 6 |

Figure ‎6.34: Multi-figure images.

|[pic] |[pic] |

|Multi-test 7 |Multi-test 8 |

|[pic] |[pic] |

|Multi-test 9 |Multi-test 10 |

|[pic] | |

|Multi-test 11 | |

Figure ‎6.35: Multi-figure images.

[pic]

Figure ‎6.36: Multi-figure shape characteristic curves.

4 THREE-DIMENSIONAL SYNTHETIC STUDIES

Verification of the method in 3D was performed to ensure that the basic algorithms were implemented correctly and that comparable results could be achieved. Figure ‎6.37 shows the potential and displacement map for an isometric 3D cube. Note the similar shape of both functions on the planar slices of the data.

[pic][pic]

Figure ‎6.37: The potential (left) and displacement map (right) for a 3D cube.

The curves for 2D and 3D objects can be directly compared in Figure 6.38, which shows the shape characteristic curves for a square, cube, circle, and sphere. The plot demonstrates similar characteristics for each complementary pair of objects, as expected. The circle and sphere track identically, as expected, since they are theoretically equivalent under this measure; the curve is as close to zero as the discrete method allows. A minor difference between the square and cube is observed because, unlike the circle and sphere, these structures are not symmetric about every angle. The additional vertices of the cube therefore contribute additional shape complexity, and the cube can be expected to have a slightly higher shape characteristic curve, as seen in the figure.

[pic]

Figure ‎6.38: Comparison of 2D and 3D shape characteristic curves for complementary objects.

A permutation test was conducted to check that the discriminating ability of the curves remains intact when translated to 3D. Six isometric cubes and six isometric spheres were used in the study. The cubes varied in edge length from 10 to 60 pixels and were placed at various rotations. The spheres varied in radius from 10 to 30 pixels. Figure 6.39 shows that the shapes group together nicely, with a large difference measure as seen in Figure 6.41. The null hypothesis, “There is no shape difference between 3D isometric cubes and spheres of varying size and orientation,” is rejected with a p-value < 0.00001.

[pic]

Figure ‎6.39: The shape characteristic curves for various 3D cubes and spheres.

[pic]

Figure ‎6.40: The mean shape characteristic curves for the groups of cubes and spheres.

[pic]

Figure ‎6.41: The permutation test histogram from the 3D spheres and cubes study, D0 = 0.703.

5 THREE-DIMENSIONAL ANATOMIC TEST STUDIES

Section 4.4 outlines a test of the Monte-Carlo permutation test using anatomic data from this portion of the work. Statistical verification of 3D anatomical structures is found there and, for brevity, is not repeated in this section. Instead, this section outlines some of the preliminary work for that study and other related efforts. Data from the Internet Brain Segmentation Repository (IBSR) were used as a starting point for anatomical studies. These data provide preliminary verification of the metric and related processes on an anatomical dataset.

The IBSR is an online archive of medical images acquired from CT, MRI, and PET scans of the brain. Data are submitted by scientists from around the world to share with other image processing scientists as test datasets. Much of the data includes detailed segmentation files that can be used in shape imaging studies. The IBSR was used to obtain brain MRI data for 18 human subjects, all considered normal and healthy at the time of imaging. A custom image reader was developed to import the scan data into MATLAB for evaluating the function of our shape metric.

[pic] [pic]

Figure ‎6.42: Brain image from the IBSR (left) and an example segmentation (right).

The imported data from the IBSR includes multi-slice segmentations of human brain anatomy. The software toolset outlined in this work converts IBSR data into three-dimensional binary volumes for direct input into the Poisson-based shape characterization algorithm. Typical processing time for these data is about 30-45 seconds total: this includes read time, approximately 200-500 iterations to find the potential, approximately 50 iterations to compute the displacement map, and computation of the shape characteristic curve at a sample rate of 0.05 equipotential increments (with the curve computation alone requiring approximately 10 or more seconds per dataset). Computation occurred on a PC with a Pentium 4 2.8 GHz processor and 512 MB of RAM running Windows XP.
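The 0.05-increment sampling of the equipotentials can be sketched as follows. This Python fragment is illustrative only (the dissertation's pipeline is in MATLAB): it normalizes a potential to [0, 1] inside the shape and bins voxels into equipotential bands every 0.05 increments. How each band is converted into a complexity value is defined by the shape characteristic formulation itself and is not reproduced here:

```python
import numpy as np

def equipotential_bands(potential, mask, step=0.05):
    """Partition a shape into equipotential bands of the normalized potential.

    The potential is rescaled to [0, 1] inside the shape, and voxels are
    binned every `step` equipotential increments, mirroring the 0.05
    sampling rate used for the shape characteristic curve."""
    u = np.where(mask, potential / potential.max(), 0.0)
    edges = np.linspace(0.0, 1.0, int(round(1.0 / step)) + 1)
    counts = [np.count_nonzero((u > lo) & (u <= hi) & mask)
              for lo, hi in zip(edges[:-1], edges[1:])]
    return edges, np.array(counts)

# Toy example: the analytic Poisson potential (R^2 - r^2) / 4 on a disc.
yy, xx = np.mgrid[0:41, 0:41]
r2 = (xx - 20.0) ** 2 + (yy - 20.0) ** 2
disc = r2 <= 15 ** 2
u = np.where(disc, (15 ** 2 - r2) / 4.0, 0.0)
edges, counts = equipotential_bands(u, disc, step=0.05)
```

Every voxel with a positive potential falls into exactly one of the twenty bands, so the bands form a complete level-set decomposition of the shape's interior.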

Figure ‎6.43 and Figure ‎6.44 show the 3D potential and displacement map respectively for a normal caudate nucleus of a human. The figures feature a 3-slice view of the volume for simple internal visualization. Figure ‎6.45 shows the shape characteristic for this volume sampled every 0.05 equipotential increments.

[pic]

Figure ‎6.43: The potential for a 3D volume of the human caudate nucleus.

[pic]

Figure ‎6.44: The displacement map for a 3D volume of a human caudate nucleus.

[pic]

Figure ‎6.45: The shape characteristic curve for a normal human caudate nucleus.

Figure 6.46 - Figure 6.48 show data from all 18 human caudate nuclei. It is easy to see that this anatomic structure has a particular shape characteristic curve that is consistent across all subjects as well as between the right and left caudate nuclei. In Figure 6.48, a difference in grouping between the right caudate and right pallidum is shown. For the purposes of this research, it is important that a given anatomical structure has the same shape characteristic curve in all subjects. Again, the statistical significance of several test structures from the data is described in Section 4.4.

[pic]

Figure ‎6.46: Shape characteristic curves for 18 different healthy adult caudate nuclei (right side) of the brain.

[pic]

Figure 6.47: Comparison of the right (red) and left (blue) caudate nuclei in 18 normal human subjects.

[pic]

Figure ‎6.48: Comparison of the right pallidum (red) and the right caudate (blue).

6 PHYSICAL PHANTOM STUDIES

The final part of the verification process is a physical phantom study. The purpose of this step is to test the overall process and to look for any important errors that arise when performing a physical study. Previously, all data used in studying the shape characteristic curves were generated by an imaging program or within MATLAB, except for the data acquired from the IBSR. Three areas of interest are addressed by this study: (1) the effects of resolution, image noise, and phantom variability when imaging known shapes; (2) the effects of voxel size and downsampling/upsampling to achieve isotropic dimensions; and (3) the process of acquiring imaging data from the clinic, segmenting volumes of interest, and converting them into a binary map for shape characteristic analysis.

Six sphere phantoms (Figure 6.49) and three cube phantoms (Figure 6.50), all water-filled, were used in this study. The sphere phantoms vary in diameter from 1 cm to 3 cm. The cube phantoms vary in size from 1.5 cm3 to 4.5 cm3 on the interior dimensions. All phantoms were randomly arranged with various rotations and translations for imaging. Small screws allow access to the interior of the phantoms for adding contrast. For both the cubes and spheres, the access screws create a slight shape difference compared to ideal representations. In the spheres, a screw connects to a tube on the end of the sphere, and a small amount of water extends past the sphere boundary into this tube. In the cubes, two access screws extend into the volume of the cube to seal the phantom. Each screw creates a small shape difference related to its relative size, introducing small geometric differences within the cube and sphere groups. Figure 6.52 shows the signal change from the presence of the seal screw in a cube.

[pic]

Figure ‎6.49: Six water-filled sphere phantoms.

[pic]

Figure ‎6.50: Three water-filled cube phantoms of varying size.

The phantoms were scanned together in a 3T GE MRI scanner using an axial spoiled gradient recalled (SPGR) pulse sequence, as used in Section 7.2 for the clinical trial. This sequence was chosen because of its high contrast for malignant glioma and the large volume of clinical data available for study. It is ideal to use the same sequence employed in the clinical trial to ensure comparable results; however, water in the phantoms has a different MR signal than glioma and normal brain tissues. The phantom scans were acquired with a 4-channel head coil using a voxel size of 0.39 mm x 0.39 mm x 1 mm, where 1 mm is the slice thickness (z direction). Slices were acquired contiguously for a 20 cm x 20 cm field of view (512 x 512 x 200 voxels). Figure 6.51 shows a coronal slice (reconstructed from the axial scan) of the phantoms as seen in the MRI scan.

[pic]

Figure ‎6.51: Physical phantom MRI coronal view.

[pic]

Figure ‎6.52: Axial slice of physical phantoms revealing an example of a screw in cube phantoms (top of left cube).

[pic]

Figure ‎6.53: Segmentation contours of a sphere and cube.

MIPAV (Medical Image Processing, Analysis, and Visualization) is an application developed at the NIH (National Institutes of Health) to provide a set of tools for image analysis. The goal of the MIPAV group is to combine various fields of expertise for solving problems involving image computation and visualization. The tool is available as a free (closed-source) download from the NIH website. In our studies, MIPAV was used to visualize and segment MRI data (from both phantoms and subjects), store volume-of-interest (VOI) data, form isotropic datasets, and measure volume.

Phantom image data were transferred from the MRI scanner to a separate workstation for image processing. DICOM images were read by MIPAV and subsequently segmented using MIPAV's level set contouring tool, which provided quick and reliable segmentation on a slice-by-slice basis (Figure 6.53). Once the segmentations were complete (creating VOIs), the data were saved to the hard disk. Next, the VOIs from each object in the scan were selected, and separate binary maps were created such that only one logical object existed in each 3D dataset. Each dataset was then downsampled to create isotropic voxels according to the minimum resolution (1 mm). A separate reslicing of the original binary map was also conducted to create isotropic voxels by upsampling.
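The two reslicing routes can be sketched as follows. This Python fragment uses nearest-neighbour sampling for brevity (the actual reslicing in MIPAV is trilinear), with the phantom scan's 0.39 x 0.39 x 1 mm voxel dimensions assumed:

```python
import numpy as np

def reslice_isotropic(vol, spacing, target):
    """Resample a binary volume to isotropic voxels of size `target` (mm)
    by nearest-neighbour index lookup (a simplification of trilinear
    reslicing).  `spacing` gives the physical voxel size per axis in mm."""
    out_shape = [max(1, int(round(n * s / target)))
                 for n, s in zip(vol.shape, spacing)]
    # For each output index, look up the nearest source voxel along each axis.
    grids = [np.minimum((np.arange(m) * target / s).astype(int), n - 1)
             for m, n, s in zip(out_shape, vol.shape, spacing)]
    return vol[np.ix_(*grids)]

# A block phantom sampled at 0.39 x 0.39 x 1 mm, resliced two ways:
vol = np.zeros((64, 64, 30), dtype=bool)
vol[5:57, 5:57, 5:25] = True
fine   = reslice_isotropic(vol, (0.39, 0.39, 1.0), 0.39)  # upsample z to 0.39 mm
coarse = reslice_isotropic(vol, (0.39, 0.39, 1.0), 1.0)   # downsample x, y to 1 mm
```

Upsampling preserves the fine in-plane boundary detail at the cost of a larger volume, while downsampling discards in-plane detail; this trade-off is exactly what the discrimination studies below examine.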

Data from each object and for each voxel size were imported into MATLAB as a binary map (logical data type) for shape characteristic analysis. This portion of the study has several objectives: (1) confirm shape similarity with ideal (digitally generated) phantoms; (2) confirm the difference in isotropic voxels versus the normal anisotropic dimensions; (3) confirm similarity of coarse (downsampled) and fine (upsampled) resolution objects, expected because of scale independence; (4) determine the best isotropic sampling method for classification; and (5) confirm that the shape characteristic curves accurately classify MRI data acquired in a similar manner as clinical data.

Figure ‎6.54 shows the shape characteristic curves for the cube phantoms at all voxel sizes as well as an ideal cube (30 pix3). The flat edges of the cube make the two different isotropic sizes indistinguishable. Both match well to the ideal cube. The normal (anisotropic) sizes are notably different as expected. From this plot, no case can be made for either isotropic sample method. Figure ‎6.55 shows similar plots for the sphere phantoms. An even larger difference is observed for the anisotropic compared to the isotropic data. The finely sampled isotropic data appears to conform much better to the ideal representation for this object. The remaining difference in the data is likely due to shape changes from the screw hole on each sphere (as the ideal sphere does not account for this).

[pic]

Figure ‎6.54: Shape characteristic curves for an ideal cube and three MRI-scanned cubes with various voxel dimensions.

[pic]

Figure ‎6.55: Shape characteristic curves for an ideal sphere and six MRI-scanned spheres with various voxel dimensions.

A study was conducted to examine the ability of the shape characteristic curves to discriminate between cubes and spheres for both the upsampled and downsampled isotropic data. Shape characteristic curves for the downsampled, low resolution data are shown in Figure ‎6.56. Note the higher complexity of the overlapping blue curves due to the square-like shape of the smaller downsampled spheres. The baseline mean shape characteristic curves are shown in Figure ‎6.57 and the subsequent permutation test results in Figure ‎6.58. The null hypothesis, “There is no difference between the shape of three downsampled isotropic cubes versus six downsampled isotropic spheres,” is not rejected with p = 0.1349. Thus, it can be concluded that this method has very poor discrimination abilities when dealing with downsampled MRI data.

[pic]

Figure ‎6.56: Shape characteristic curve comparison for three cubes (red) and six spheres (blue), all downsampled to the maximum voxel dimension to create a relatively low-resolution binary map.

[pic]

Figure ‎6.57: Mean shape characteristic curves of the low resolution isotropic phantom data.

[pic]

Figure ‎6.58: The permutation test histogram for the low resolution isotropic phantom discrimination test. D0 = 0.19435.

Next, a study of the upsampled (high resolution), fine isotropic data was conducted to determine the discriminative ability of the shape characteristic curves. Shape characteristic curves of the upsampled isotropic data can be seen in Figure 6.59, along with the mean curves (plotted with the ideal curves) in Figure 6.60. The Monte-Carlo permutation test successfully rejects the null hypothesis, “There is no difference between the shape of three upsampled isotropic cubes and six upsampled isotropic spheres,” with a very high level of significance, p = 0.0001. Thus, an important conclusion can be drawn from this study and the previous one: upsampled (resliced) isotropic data are preferred for shape characteristic comparison tests when using anisotropic SPGR MRI scans. The loss of geometric integrity when downsampling reduces the discrimination power of the shape characteristic curve to insignificant levels.

[pic]

Figure ‎6.59: The shape characteristic curves of the finely sampled phantom data.

[pic]

Figure ‎6.60: Mean shape characteristic curves of the finely sampled phantom data, plotted with the curves of the ideal shapes.

[pic]

Figure ‎6.61: The permutation test results of the finely sampled phantom data. D0 = 0.44819.

Finally, to reinforce the difference between the two types of isotropic data, a statistically significant difference (p < 0.0001) between the shapes of the upsampled and downsampled spheres was demonstrated. We reject the null hypothesis, “There is no difference between the shape of six upsampled and six downsampled isotropic spheres,” with a high level of confidence. Figure 6.62 shows the mean shape characteristic curves, and Figure 6.63 shows the corresponding histogram from the Monte-Carlo permutation test.

[pic]

Figure ‎6.62: The mean shape characteristic curves of the upsampled (fine; blue) and downsampled (coarse; red) sphere phantom data.

[pic]

Figure ‎6.63: The results of the permutation test comparing the finely and coarsely sampled spheres. D0 = 0.24453.

CLINICAL TRIAL

Thorough validation of the shape characteristic curve (Chapter 6) has enabled moving forward with clinical trials for tumor shape characterization. The trial presented in this work is a study of the relationship between brain tumor shape and the outcome of subject survival. Such a paradigm has not been studied before, and it presents the possibility of extracting a new indicator of outcome that could be of great value to the field of oncology.

1 BACKGROUND

Little is known about the role that shape plays in tumor progression and disease outcome. While it is generally accepted that the complex geometric presentation and/or progression of malignant disease translates into poor outcome, proper quantification of this property has not been explored. Malignant glioma is one such disease where shape may play an important role in outcome. However, many other factors such as age, disease site, and treatment are important as well. It is theorized that outcome is independent of tumor size (quantified by volume); rather, shape complexity (quantified by the shape characteristic curve) plays a significant role in classification.

[pic] [pic]

Figure ‎7.1: Two different examples of malignant glioma, grade IV (GBM).

Glioblastoma multiforme (GBM) tumors arise from lower-grade astrocytomas and are by far the most common glial tumor. They account for over 50% of all primary brain tumors, and most present in adults greater than 50 years of age55. Treatment of these tumors is palliative, with a mean survival of 7.5 months for all patients or 12 months for patients given optimal treatments57. GBMs typically have a contrast-enhanced ring in both CT and MRI images, comprised of neoplastic cells and enclosing areas of necrosis or cystic regions56. Their reliable solid geometric arrangement and contrast enhancement make shape analysis a possible option for characterizing these tumors.

2 METHODS

This clinical trial was conducted according to the Wake Forest University Health Sciences Institutional Review Board (IRB) policy with an IRB-approved protocol (#IRB6225: Retrospective imaging study of patients with malignant glioma). Potential subjects were identified retrospectively based on WHO Grade 4 pathologic diagnosis from a biopsy or post-surgical specimen for all patients seen at Wake Forest University Baptist Medical Center between January 2000 and January 2008. Imaging data for each potential subject were reviewed, and the following criteria were used to identify acceptable subjects: (1) an available MRI study with no prior surgical operations, chemotherapy, or radiation therapy to the head (excluding simple biopsy procedures); (2) a spoiled gradient recalled (SPGR) MRI scan with contiguous slices and a maximum slice thickness of 3 mm; and (3) contrast-enhancing mass appearing on more than two slices. The SPGR sequence was selected for several reasons: (1) most GBM patients have high-resolution SPGR scans; and (2) GBM has a high amount of contrast enhancement when imaged with SPGR. To avoid possible bias, survival data were not collected until the statistical analysis portion of the study was underway. Subject selection and subsequent data collection took approximately six weeks to complete. In all, 98 subjects were selected after a final review based on the above criteria.

Reliable methods for automatically segmenting GBM are not available at this time. As such, manual segmentation must be performed. Segmentation was accomplished using the MIPAV application. First, the axial data were contoured slice-by-slice by a trained observer. Only tumor mass (including necrosis) was contoured; edema and non-involved neighboring vasculature were excluded. Next, each contour was thoroughly reviewed and adjusted by an expert observer: a highly-trained and experienced neurosurgeon. The shape classification scheme presented in this work relies on ideal segmentation, thus a large assumption is made that the expert review produces consistent, reliable identification of the tumor mass.

Contours (known as VOIs in MIPAV) were selected and converted into a single binary volume with the same resolution as the original scan for each patient. The data were then resliced (or upsampled) using trilinear interpolation to form isotropic voxels with dimensions equal to the minimum voxel dimension. The data were then saved in the Analyze image format (.img) with a header file (.hdr). A MATLAB script (experiment_clinical_trial.m) ran through each subject and trimmed the binary image down to the minimum dimensions surrounding the tumor, then added a 2-pixel padding so that a zero-potential region exists outside the boundary, as required by the Dirichlet boundary conditions on Poisson's equation. The 3D solution to Poisson's equation, the corresponding displacement map, and, finally, a shape characteristic curve were then determined. These data are stored separately, variable by variable, within the subject's DICOM file directory for easy retrieval.
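The trim-and-pad step can be sketched as follows. This is a Python illustration of the preprocessing just described, not the dissertation's MATLAB script:

```python
import numpy as np

def trim_and_pad(vol, pad=2):
    """Crop a binary volume to the bounding box of the tumor and add a
    2-voxel zero border, providing the exterior region that the Dirichlet
    boundary condition (zero potential outside the shape) requires."""
    coords = np.argwhere(vol)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    cropped = vol[tuple(slice(a, b) for a, b in zip(lo, hi))]
    return np.pad(cropped, pad, mode="constant", constant_values=0)

# A hypothetical 10 x 15 x 5 voxel tumor inside a larger scan volume:
vol = np.zeros((50, 60, 70), dtype=bool)
vol[10:20, 15:30, 40:45] = True
trimmed = trim_and_pad(vol)
```

Trimming keeps the grid (and hence the iterative solver's run time) as small as possible, while the padding guarantees that every boundary voxel of the shape has zero-valued neighbors.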

Survival information for each subject was gathered by a department clinical trials coordinator as a very last step to eliminate bias in the subject selection and segmentation process as well as to allow the maximum time possible for subject survival data to be acquired by the hospital’s record keeping system. Survival was documented as the number of days until death following the acquisition of the pathologic sample. This process brought our final subject number down from 101 to 98 as three subjects had expired with unknown dates of death. Of the 98 remaining subjects, 11 were still living at the time of the study.

Statistical analysis was conducted using a Monte-Carlo permutation test (Section 4.3). The survival data were used to group the subjects into surviving and expired classes, and the analysis was conducted iteratively for 1 – 18 months. Month 18 was the maximum because it is the longest survival time of a subject in the study. Several other groupings were recorded, including subjects less than 75 years of age, without brainstem involvement, with single focal tumors, and with single-hemisphere involvement. However, insufficient sample sizes prevented a statistically significant result for these subpopulations. The median shape characteristic curve values (evaluated at each point separately) were used as a metric for grouping the population, as opposed to the mean, which was too sensitive to outliers (see results in Section 7.3). The median is an acceptable replacement for the mean when evaluating a subject population. The validation results in Chapter 6 remain valid and comparable to this metric, as the median produces precisely the same result as the mean in any geometric phantom study where independent forces that affect the outcome are not present (as in this clinical trial).

3 RESULTS

The Kaplan-Meier survival curve for the collected subject data is shown in Figure 7.2. Our survival data are consistent with published GBM data for mean survival57. The curve is nearly logarithmic: a log curve can be fit to the survival data with R2 = 0.975. Thus, a rapid decline in survival is observed in the first 8 months compared to the longer-surviving subjects. Clinically, this could be the result of subjects surviving due to unusual treatment or therapy response. Such subjects may skew the data; thus, the median reduces the sensitivity to such issues. For example, the subject with the highest level of complexity (defined by the shape characteristic curve) in this group is still surviving 18 months after diagnosis despite having a multifocal tumor with invasion into both hemispheres of the brain as well as the brainstem.
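The form of the logarithmic fit quoted above can be reproduced as follows. This Python sketch fits s = a + b ln(t) by least squares and computes R2; the surviving-fraction data below are synthetic stand-ins, not the trial's actual survival data:

```python
import numpy as np

def log_fit_r2(t, s):
    """Least-squares fit of s = a + b * ln(t) and the resulting R^2."""
    x = np.log(t)
    b, a = np.polyfit(x, s, 1)          # slope b, intercept a
    pred = a + b * x
    ss_res = np.sum((s - pred) ** 2)    # residual sum of squares
    ss_tot = np.sum((s - np.mean(s)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic surviving fraction roughly shaped like a GBM survival curve.
months = np.arange(1, 19)
surv = 1.05 - 0.33 * np.log(months) + 0.01 * np.sin(months)
a, b, r2 = log_fit_r2(months, surv)
```

A negative slope b with R2 near 1 corresponds to the rapid early decline and slow late tail described above.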

From the survival curve, it is also observed that more than 25% of the population is expired by month three and less than 25% is surviving at month 16. To maintain the integrity of the permutation test statistic, only data grouped between months 3 – 15 are considered sufficiently sampled to form a statistically significant result; a small group creates unreliable p-values in a Monte-Carlo permutation test (see Chapter 4).

[pic]

Figure ‎7.2: Survival curve for GBM clinical trial (N = 98).

Shape differences were explored as a function of time for months 1 – 18 following specimen acquisition (and subsequent diagnosis). As stated before, only data between months 3 – 15 are considered, because outside this range the sample size of a given group falls below 25% of the total population. For each month, the population was split into two groups: the surviving fraction and the expired fraction. A Monte-Carlo permutation test (using 100,000 permutations per test) was used to determine the difference between the groups with the null hypothesis, “There is no difference between the tumor shape of expired and surviving subjects at (#) months after diagnosis.” This null hypothesis and approach allow the examination of the subjects at a given point in time to see whether the shape characteristic curves are significantly different.

Figure ‎7.3 - Figure ‎7.17 show the shape characteristic curves, median group curves, and Monte-Carlo permutation test histogram using each month following diagnosis as a separation point between groups. A general trend should be noted: shape characteristic curves from expired subjects (in blue) occur at higher levels compared to surviving subjects. This is also seen in the median curves for each figure. Months with higher statistical significance generally have median curves with a greater amount of separation. Months one and two do not have enough expired subjects to consider for statistical significance but figures are included to show the trend.

[pic]

Figure ‎7.3: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than one month.

[pic]

Figure ‎7.4: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than two months.

[pic]

Figure ‎7.5: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than three months.

[pic]

Figure ‎7.6: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than four months.

[pic]

Figure ‎7.7: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than five months.

[pic]

Figure ‎7.8: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than six months.

[pic]

Figure ‎7.9: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than seven months.

[pic]

Figure ‎7.10: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than eight months.

[pic]

Figure ‎7.11: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than nine months.

[pic]

Figure ‎7.12: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than ten months.

[pic]

Figure ‎7.13: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than eleven months.

[pic]

Figure ‎7.14: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than twelve months.

[pic]

Figure ‎7.15: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than thirteen months.

[pic]

Figure ‎7.16: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than fourteen months.

[pic]

Figure ‎7.17: Shape characteristic curves and the permutation test histogram for the clinical trial data grouped by survivors greater than fifteen months.

The shape characteristic curve plots show a general trend of expirations occurring mostly in the higher-complexity regions of the curve during the first 12 months. This is confirmed by statistically significant p-values for months 2 – 12 following diagnosis; months one and two are nevertheless excluded because of the small sample size of the expired group, which preserves the integrity of the statistical test as stated previously. Thus, the null hypothesis is rejected for months 3 – 12. Figure 7.18 shows the p-values for each of the permutation tests. The dashed line indicates the threshold of statistical significance (p ≤ 0.05). Note the loss of significance one year post-diagnosis.

[pic]

Figure ‎7.18: Statistical significance of clinical trial permutation tests shown for each month.

|Months (Post-Diag) |p-value   |Subjects Expired |Subjects Surviving |
|1                  |0.09617   |6                |92                 |
|2                  |0.02047 * |16               |82                 |
|3                  |0.01729 * |28               |70                 |
|4                  |0.01122 * |34               |64                 |
|5                  |0.01839 * |35               |63                 |
|6                  |0.02906 * |40               |58                 |
|7                  |0.04060 * |45               |53                 |
|8                  |0.01070 * |54               |44                 |
|9                  |0.01155 * |56               |42                 |
|10                 |0.02261 * |57               |41                 |
|11                 |0.01874 * |60               |38                 |
|12                 |0.02762 * |62               |36                 |
|13                 |0.10593   |64               |34                 |
|14                 |0.05534   |69               |29                 |
|15                 |0.09031   |70               |28                 |
|16                 |0.10353   |74               |24                 |
|17                 |0.16520   |77               |21                 |
|18                 |0.20113   |78               |20                 |

Table 2: Clinical trial statistical significance for each month post-diagnosis, including expired and surviving subject counts. Significant p-values (p ≤ 0.05) are marked with an asterisk.

Additional studies were organized that excluded subjects based on criteria including disease location, age, and number of foci, both individually and in various combinations. However, all were discarded due to insufficient data. It may be possible to examine dependence on such features, but a much larger dataset would be required.

Tumor volume was recorded for each subject to examine whether a relationship existed between volume and survival. Figure 7.19 shows a plot of survival versus tumor volume. No statistically significant relationship could be found by curve fitting: linear, quadratic, exponential, logarithmic, and power functions were tried, with a maximum R² of 0.0463 (logarithmic). Inspection of Figure 7.19 reveals a large variance in volume relative to survival, consistent with the absence of a meaningful relationship between the two.
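The fitting comparison described above can be sketched as follows. This is an illustrative Python sketch on synthetic data, not the study’s MATLAB code; the function names and the log-space linearization of the exponential and power models are assumptions.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination R^2 of a fitted model."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_survival_vs_volume(volume, survival):
    """Fit candidate functions of tumor volume to survival and report R^2
    for each; uniformly small values indicate no usable relationship.
    Requires positive volumes and survival times."""
    v = np.asarray(volume, dtype=float)
    s = np.asarray(survival, dtype=float)
    fits = {}
    # Linear and quadratic: ordinary polynomial least squares.
    for name, deg in (("linear", 1), ("quadratic", 2)):
        c = np.polyfit(v, s, deg)
        fits[name] = r_squared(s, np.polyval(c, v))
    # Logarithmic: s ~ a*ln(v) + b.
    c = np.polyfit(np.log(v), s, 1)
    fits["logarithmic"] = r_squared(s, np.polyval(c, np.log(v)))
    # Exponential: s ~ a*exp(b*v), linearized as ln(s) versus v.
    c = np.polyfit(v, np.log(s), 1)
    fits["exponential"] = r_squared(s, np.exp(np.polyval(c, v)))
    # Power: s ~ a*v^b, linearized in log-log space.
    c = np.polyfit(np.log(v), np.log(s), 1)
    fits["power"] = r_squared(s, np.exp(np.polyval(c, np.log(v))))
    return fits
```

When survival is unrelated to volume, as in the trial data, every entry of the returned dictionary stays near zero.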

[pic]

Figure ‎7.19: Survival versus tumor volume for clinical trial subjects.

7.4 DISCUSSION

Statistically significant groupings of expired versus surviving fractions for months 3 – 12 are shown in Table 2. It is therefore concluded that cues derived from the shape characteristic curves of GBM tumors are capable of classifying subject outcome by survival following pathological diagnosis. A general trend is noted for early expirees: most have shape characteristic curves in the upper region of the range of curves, meaning their tumors are geometrically more complex than those of survivors over the same period.

The median difference measure produced a more stable result than the mean when examining the subject population in this clinical trial. The reason is an inherent skew of the data from factors such as age, disease location, and others that are much more difficult to quantify. One such outlier was an older subject with a small, circular tumor who expired within several weeks of diagnosis; at the time of imaging and diagnosis, the subject was already incoherent and declining rapidly, and the resulting shape characteristic curve was one of the lowest in the study. Conversely, a subject with a highly complex tumor (and the highest shape characteristic curve in the study) was still surviving at the time of the study (greater than 18 months). Note in Figure 7.17 that the highest curve in the top left plot belongs to this surviving subject.

In a mean-based study, skew allows one or two outlier subjects, whose outcomes may be driven by factors outside of tumor geometry, to bias the result heavily. It is important that such subjects are not capable of dramatically changing the results of the study, as was observed when using the mean difference metric for Di. The final data were generated with both of these outliers included. It should be noted, however, that the mean is an appropriate metric when grouping by factors that are not affected by variable subject populations, as seen in the normal brain anatomy study and the synthetic studies: grouping a collection of spheres and squares is not subject to population-based factors. Thus, the median is the appropriate metric for this clinical trial, while the mean remains appropriate for the validation studies in Chapter 6.
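The robustness argument above can be illustrated numerically. The sketch below is illustrative Python with fabricated toy curves, not study data; it shows how a single outlier subject displaces a mean group curve far more than the median group curve.

```python
import numpy as np

# Ten similar shape characteristic curves plus one extreme outlier,
# standing in for a subject whose outcome was driven by non-geometric factors.
rng = np.random.default_rng(0)
curves = 0.5 + 0.02 * rng.standard_normal((10, 50))
outlier = np.full((1, 50), 5.0)
group = np.vstack([curves, outlier])

# L2-norm displacement of each group statistic caused by the outlier.
mean_shift = np.linalg.norm(np.mean(group, axis=0) - np.mean(curves, axis=0))
median_shift = np.linalg.norm(np.median(group, axis=0) - np.median(curves, axis=0))

# The single outlier drags the mean curve far more than the median curve.
assert median_shift < mean_shift
```

The same reasoning motivates using the median-based difference metric for the variable clinical population while retaining the mean for the homogeneous synthetic groups.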

Figure 7.1, earlier in this chapter, shows an example of two different GBM tumors, one with a round, less complex shape and the other with a highly irregular shape. The shape characteristic curve of the tumor on the right (the more complex in appearance) has much higher values than that of the tumor on the left. Tumors with larger-valued shape characteristic curves may exhibit more aggressive disease and correspondingly greater invasiveness, so survival for subjects with complex, invasive tumors is expected to be worse than for those with round, compartmentalized tumors. It is also qualitatively noted that less complex tumors tend to grow in near-spherical geometries with easily delineated boundaries. One theory is that these tumors have less microscopic invasion and are more likely to respond to surgery or radiation therapy. The shape characteristic curve provides a direct geometric quantification of this property.

An interesting finding from this work is that tumor volume for GBM does not correlate with survival, suggesting that the shape property observed is size (or volume) independent. The biological basis for this finding is unknown at this time. It may lead clinicians to reconsider the role of tumor volume as a prognostic indicator and as a factor in treatment decisions for some types of tumors; shape, rather, may play a statistically significant role and should be considered in the planning phases of treatment.

The methods presented in this work could integrate into the clinical environment in a variety of ways. A special plug-in to standard radiology DICOM viewers or picture archiving and communication systems (PACS) would allow radiologists to create shape characteristic curves for subjects with malignant diseases such as GBM where shape has shown a statistically significant correlation to outcome. The software could also be integrated into various radiation treatment planning systems (TPS) for use in a variety of future studies.

CONCLUSIONS AND FUTURE DIRECTIONS

A modular toolset was created in MATLAB capable of performing shape analysis on a variety of data. The software can process synthetic shapes in 2D as well as 3D clinical data from CT, MRI, and a variety of other sources that utilize DICOM, Analyze, or 2D sliced image data. The toolset serves as a core group of programs for future applications related to Poisson-based shape investigation as well as a reference library for running examples used in this work.

An extensive validation of the shape characteristic curve derived from Haidar et al. [3] demonstrates its usefulness in 2D and 3D shape characterization problems on synthetic and anatomic shapes. A number of previously undocumented properties of shape characteristic curves have been found and explored, including (1) the non-monotonic nature of the curves (Sections 3.4 and 6.3); (2) scale, translation, and rotation independence (Section 6.2.3); (3) convergence to zero at the global maximum and the boundary effect (Section 3.4); (4) the differences between 2D and 3D shape representations (Section 6.4); (5) an examination of the multi-figural image problem (Section 6.3); (6) the effect of isotropic voxel upsampling and downsampling (Section 6.6); and (7) the use of the median as a metric of group statistics in a population (Section 7.2). We conclude from the validation work in Chapter 6 that the shape characteristic curve is stable, robust, and well characterized for classifying 3D brain tumors. The clinical trial presented in Chapter 7 shows shape-related differences between surviving and expired subject fractions for up to one year after diagnosis. We believe this work provides a theoretical and experimental basis for pursuing further anatomic studies using shape characteristic curves, especially in oncology in relation to tumor geometry.

One clinical future direction of this work is to explore the effects of tumor location and number of foci on the shape characteristic curve, as well as demographic factors such as age, treatments pursued, and other prognostic indicators. These factors may influence tumor shape and the resulting shape characteristic curves in ways that improve the predictive ability of the method. It is also possible that certain factors are more influential than shape; data for such subjects should then be excluded from shape-based studies. Additionally, this work could be used to explore the shape signatures of other tumors and disease sites, including spiculated breast lesions, where simple measures of shape complexity are common indicators of malignancy [71-73]. The shape characteristic curve may discriminate more accurately in such cases, as it includes both global and local properties of shape.

Another difficult problem that could be addressed with this method is the analysis of higher-dimensional data. Time-related changes in tumor geometry imaged using 4D scanning techniques may prove useful in guiding treatment decisions; this could be particularly valuable for lung tumors, whose shape changes constantly with the motion of the lung. Images registered to biologically derived data could reveal new characteristics of bioanatomic shape that are otherwise difficult to observe. Biological factors derived from methods such as fluorescence imaging, MRS, and pathological reports could be integrated into spatially distributed maps. Assuming ideal registration, there may exist distinct geometric arrangements that can be uniquely characterized by the shape characteristic curve. Such indicators may be useful in identifying the shape of tumor cell invasion into surrounding tissue and could provide important prognostic information or treatment guidance.

One problem not addressed by this work is quantifying tumor shape change over long periods of time using serial images. We have begun working on this using Laplace’s equation in a similar manner, as described in [70]. The streamlines of a PDE such as those described herein are useful for measuring distances of pixel displacement; they make it possible to map and quantify one-to-one pixel displacement along any boundary. In this dissertation, the displacement is used to quantify complexity via the coefficient of variation; however, many future avenues of research exist using other applications of displacement [70].

The shape characteristic curve is a useful and robust way to characterize the shape of tumors and possibly many other geometric features. The toolset developed for this work is capable of quickly finding shape characteristic curves for 2D or 3D objects. A number of new findings about shape characteristic curves have been described. Finally, GBM shape is shown to be statistically different in appropriately sampled groups up to one year after diagnosis.

REFERENCES

1. M. Hardisty, L. Gordon, P. Agarwal, T. Skrinskas, and C. Whyne, “Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method,” Med. Phys. 34, 3127-3134 (2007).

2. L. Gorelick, M. Galun, E. Sharon, R. Basri, and Achi Brandt, “Shape Representation and Classification Using the Poisson Equation,” IEEE Trans. Pattern Analysis and Machine Intelligence, 28, 1991-2005 (2006).

3. H. Haidar, S. Bouix, J. Levitt, R. McCarley, M. Shenton, and J. Soul, “Characterizing the Shape of Anatomical Structures with Poisson’s Equation,” IEEE Trans. Med. Imag. 25, 1249-1257 (2006).

4. H. Haidar, S. Egorova, and J. Soul, “New Numerical Solution of the Laplace Equation for Tissue Thickness Measurement in Three-Dimensional MRI,” J. Math. Modelling Algorithms 4, 83-97 (2005).

5. S. Jones, B. Buchbinder, and I. Aharon, “Three-Dimensional Mapping of Cortical Thickness Using Laplace’s Equation,” Hum. Brain Mapp. 11, 12-32 (2000).

6. L. Gorelick, M. Galun, E. Sharon, R. Basri, and A. Brandt, “Shape Representation and Classification Using the Poisson Equation,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit., Washington, DC, 2004, 61-67.

7. A. J. Yezzi and J. L. Prince, “An Eulerian PDE Approach for Computing Tissue Thickness,” IEEE Trans. Med. Imag. 22, 1332-1339 (2003).

8. H. Haidar, S. Bouix, J. Levitt, C. Dickey, R. W. McCarley, M. E. Shenton, and J. S. Soul, “Characterizing the Shape of Anatomical Structures with Poisson’s Equation,” in MICCAI, 2004.

9. H. Haidar, S. Bouix, J. Levitt, C. Dickey, R. W. McCarley, M. E. Shenton, and J. S. Soul, “An Elliptic PDE Approach for Shape Characterization,” in Proc. 26th Annual Int’l. Conf. IEEE EMBS, San Francisco, CA, 2004, 1521-1524.

10. M. Hardisty, L. Gordon, P. Agarwal, T. Skrinskas, and C. Whyne, “Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method,” Med. Phys. 34, 3127-3134 (2007).

11. C. Whyne, M. Hardisty, F. Wu, and T. Skrinskas, “Quantitative characterization of metastatic disease in the spine. Part II. Histogram-based analyses,” Med. Phys. 34, 3279-3285 (2007).

12. J. H. Bramble, “Fourth-Order Finite Difference Analogues of the Dirichlet Problem for Poisson's Equation in Three and Four Dimensions,” Mathematics of Computation, 17, 217-222 (1963).

13. J. K. Shultis and R. E. Faw, “Fundamentals of Nuclear Science and Engineering,” New York, Marcel Dekker, 2002.

14. J.P. Guyon, M. Foskey, J. Kim, Z. Firat, B. Davis, K. Haneke and S.R. Aylward, “VETOT, Volume Estimation and Tracking Over Time: Framework and Validation,” MICCAI 2003, LNCS 2879, 142-149 (2003).

15. S.M. Haney, P.M. Thompson, T.F. Cloughesy, J.R. Alger, and A.W. Toga, “Tracking Tumor Growth Rates in Patients with Malignant Gliomas: A Test of Two Algorithms,” Am. J. Neuroradiol. 22, 73–82, (2001).

16. L.P. Clarke, R.P. Velthuizen, M. Clark, J. Gaviria, L. Hall, D. Goldgof, R. Murtagh, S. Phuphanich and S. Brem, "MRI Measurement of Brain Tumor Response: Comparison of Visual Metric and Automatic Segmentation," Magn. Reson. Imaging. 16, 271–279 (1998).

17. N.J. Yue, S. Kim, S. Jabbour, V. Narra, and B.G. Haffty, “A strategy to objectively evaluate the necessity of correcting detected target deviations in image guided radiotherapy,” Med. Phys. 34, 4340-4347 (2007).

18. D. Yan, J. Wong, F. Vicini, J. Michalski, C. Pan, A. Frazier, E. Horwitz and A. Martinez, “Adaptive modification of treatment planning to minimize the deleterious effects of treatment setup errors,” Int. J. Radiat. Oncol., Biol., Phys. 38, 197-206 (1997).

19. Q. Wang, E.K. Liacouras, E. Miranda, U.S. Kanamalla, V. Megalooikonomou, “Classification of Brain Tumors Using MRI and MRS Data,” Proc. of SPIE Vol. 6514, 65140S (2007).

20. R. Silipo, G. Deco, and H. Bartsch, “Brain tumor classification based on EEG hidden dynamics,” Intelligent Data Analysis 3, 287-306 (1999).

21. C. Majós, M. Julià-Sapé, J. Alonso, M. Serrallonga, C. Aguilera, J. Acebes, C. Arús and J. Gili, “Brain Tumor Classification by Proton MR Spectroscopy: Comparison of Diagnostic Accuracy at Short and Long TE,” Am. J. Neuroradiol. 25, 1696-1704 (2004).

22. L. Lukas, A. Devos, J.A.K. Suykens, L. Vanhamme, F.A. Howe, C. Majós, A. Moreno-Torres, M. Van Der Graaf, A.R. Tate, C. Arús, and S. Van Huffel, “Brain tumor classification based on long echo proton MRS signals,” Artificial Intelligence in Med. 31, 73-89 (2004).

23. Tsunoda A, Komatsuzaki A, Suzuki Y, Muraoka H., "Three-dimensional imaging of the internal auditory canal in patients with acoustic neuroma," Acta. Otolaryngol. Suppl. 542, 6-8 (2000).

24. J.D. Bourland and Q.R. Wu, “Use of shape for automated, optimized 3D radiosurgical treatment planning,” Lect. Notes Comput. Sc. 1131, 553-558 (1996).

25. J.Q. Wu and J.D. Bourland, “Morphology-guided radiosurgery treatment planning and optimization for multiple isocenters,” Med. Phys. 26, 2151-2160 (1999).

26. J.Q. Wu and J.D. Bourland, “Three-dimensional skeletonization for computer-assisted treatment planning in radiosurgery,” Comput. Med. Imaging. Graph 24, 243-251 (2000).

27. J.Q. Wu and J.D. Bourland, “A study and automatic solution for multi-shot treatment planning for the Gamma Knife,” J. Radiosurgery 3, 77-84 (2000).

28. S.M. Pizer, P.T. Fletcher, S. Joshi, A.G. Gash, J. Stough, A. Thall, G. Tracton, and E.L. Chaney, “A method and software for segmentation of anatomic object ensembles by deformable m-reps,” Med. Phys. 32, 1335-1345 (2005).

29. J.M. Gauch and S.M. Pizer, “The intensity axis of symmetry and its application to image segmentation,” IEEE Trans. Patt. Anal. Machine Intell. 15, 753-770 (1993).

30. M. Styner and G. Gerig, “Hybrid boundary-medial shape description for biologically variable shapes,” MMBIA, 2000.

31. C. Lu, S.M. Pizer, S. Joshi, and J. Jeong, “Statistical multi-object shape models,” Int’l J. Comp. Vision 75, 387-404 (2007).

32. M. Styner, J.A. Lieberman, D. Pantazis, and G. Gerig, “Boundary and medial shape analysis of the hippocampus in schizophrenia,” Med. Image Anal. 8, 197-203 (2004).

33. M. Styner, G. Gerig, J. Lieberman, D. Jones, and D. Weinberger, “Statistical shape analysis of neuroanatomical structures based on medial models,” Med. Image Anal. 7, 207-220 (2003).

34. H. Blum and R. Nagel, “Shape description using weighted symmetric axis features,” Pattern Recognit. 10, 167-180 (1978).

35. M.E. Leventon, W.E.L. Grimson, and O. Faugeras, “Statistical shape influence in geodesic active contours,” In Proc. IEEE Conf. Comp. Vision and Patt. Recog., 316-323 (2000).

36. V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” Int’l J. Comp. Vision 22, 61-79 (1997).

37. V. Caselles, J. Morel, G. Sapiro, and A. Tannenbaum, “Introduction to the special issue on PDEs and geometry-driven diffusion in image processing and analysis,” IEEE Trans. on Image Processing 7, 269-274 (1998).

38. Y. Guo and B. Vemuri, “Hybrid geometric active models for shape recovery in medical images,” In Int’l Conf. Inf. Proc. In Med. Imaging, 322-333 (1999).

39. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int’l J. Comp. Vision 1, 321-331 (1988).

40. A. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi, “Gradient flows and geometric active contour models,” In IEEE Int’l Conf. Comp. Vision, 810-815 (1995).

41. L. Alvarez, P.L. Lions, and J.M. Morel, “Image selective smoothing and edge detection by nonlinear diffusion,” SIAM J. Numer. Anal. 29, 845-866 (1992).

42. D. Gabor, “Information theory in electron microscopy,” Lab. Investig. 14, 801-807 (1965).

43. A.K. Jain, “Partial differential equations and finite-difference methods in image processing, part 1: Image representation,” J. Optim. Theory Appl. 23, 65-91 (1977).

44. J.J. Koenderink, “The structure of images,” Biol. Cybern. 50, 363-370 (1984).

45. B. ter Haar Romeny, Ed., Geometry Driven Diffusion in Computer Vision. Boston, M.A.: Kluwer (1994).

46. A.P. Witkin, “Scale-space filtering,” in Proc. Int. Joint Conf. Artif. Intell., 1983, 1019-1021 (1983).

47. A.M. Bruaset, X. Cai, H.P. Langtangen and A. Tveito, “Numerical solution of PDEs on parallel computers utilizing sequential simulators,” Lect. Notes Comput. Sc. 1343, 161-168 (1997).

48. J.C. Strikwerda, Finite Difference Schemes and Partial Differential Equations, Philadelphia, PA: SIAM (2004).

49. L. Johnson and R. Riess, Numerical Analysis, Reading, MA: Addison-Wesley (1982).

50. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes: The Art of Scientific Computing. Cambridge, U.K.: Cambridge Univ. Press (1988).

51. A. Jemal, R. Siegel, E. Ward, T. Murray, J. Xu, C. Smigal, M.J. Thun, “Cancer statistics, 2006,” CA Cancer J. Clin. 56, 106-30 (2006).

52. H. Liu, P. Balter, T. Tutt, B. Choi, J. Zhang, C. Wang, M. Chi, D. Luo, T. Pan, S. Hunjan, et al., “Assessing Respiration-Induced Tumor Motion and Internal Target Volume Using Four-Dimensional Computed Tomography for Radiotherapy of Lung Cancer,” Int. J. Radiat. Oncol., Biol., Phys. 68, 531-540 (2007).

53. E. Khain and L.M. Sander, “Dynamics and Pattern Formation in Invasive Tumor Growth,” Phys. Rev. Lett. 96, 188103 (2006).

54. E.B. Claus, M.L. Bondy, J.M. Schildkraut, J.L. Wiemels, M. Wrensch, and P.M. Black, “Epidemiology of intracranial meningioma,” Neurosurg. 57, 1088-1095 (2005).

55. A. Behin, K. Hoang-Xuan, A.F. Carpentier and J. Delattre, “Primary brain tumours in adults,” Lancet 361, 323-331 (2003).

56. S. Sathornsumetee, J.N. Rich, D.A. Reardon, "Diagnosis and treatment of high-grade astrocytoma," Neurol. Clin. 25, 1111-1139 (2007).

57. M.J. Tait, V. Petrik, A. Loosemore, B.A. Bell, M.C. Papadopoulos, "Survival of patients with glioblastoma multiforme has not improved between 1993 and 2004: analysis of 625 cases," Br. J. Neurosurg. 21, 496-500 (2007).

58. T. R. Mackie et al., “Image guidance for precise conformal radiotherapy,” Int'l. J. Radiat. Oncol., Biol., Phys. 56, 89–105 (2003).

59. P. Bergström and A. Widmark, “High-precision conformal radiotherapy (HPCRT) of prostate cancer—A new technique for exact positioning of the prostate at the time of treatment,” Int’l. J. Radiat. Oncol., Biol., Phys. 42, 305–311 (1998).

60. D. A. Jaffray, J. H. Siewerdsen, J. W. Wong, and A. A Martinez, “Flatpanel cone-beam computed tomography for image-guided radiation therapy,” Int'l. J. Radiat. Oncol., Biol., Phys. 53, 1337-1349 (2002).

61. M. Guckenberger, K. Baier, I. Guenther, A. Richter, J. Wilbert, O. Sauer, D. Vordermark and M. Flentje, "Reliability of the bony anatomy in image-guided stereotactic radiotherapy of brain metastases," Int'l. J. Radiat. Oncol., Biol., Phys. 69, 294-301 (2007).

62. W.Y. Song, E. Wong, G.S. Bauman, J.J. Battista, and J. Van Dyk, "Dosimetric evaluation of daily rigid and nonrigid geometric correction strategies during on-line image-guided radiation therapy (IGRT) of prostate cancer," Med. Phys. 34, 352-365 (2007).

63. I.E. Naqa, D. Yang, A. Apte, D. Khullar, S. Mutic et al., "Concurrent multimodality image segmentation by active contours for radiotherapy treatment planning," Med. Phys. 34, 4738-4749 (2007).

64. L. Feuvret, G. Noel, J. Mazeron, and P. Bey, “Conformity index: a review,” Int’l. J. Radiat. Oncol., Biol., Phys. 64, 333-342 (2006).

65. P. Golland and B. Fischl, "Permutation Tests for Classification: Towards Statistical Significance in Image-Based Studies," IPMI 2003, 330–341 (2003).

66. A.P. Holmes, R.C. Blair, J.D.G. Watson, and I. Ford, "Nonparametric Analysis of Statistic Images from Functional Mapping Experiments," J. Cerebral Blood Flow & Metabolism, 16, 7-22 (1996).

67. Y. Viniotis, Probability and Random Processes for Electrical Engineers, New York, NY: WCB/McGraw-Hill (1998).

68. T. Nichols and A.P. Holmes, “Nonparametric Permutation Tests For Functional Neuroimaging: A Primer with Examples”, Human Brain Mapping, 15, 1–25 (2001).

69. E.S. Edgington, "Approximate randomization tests," J. Psychol., 72, 143–149 (1969).

70. B.J. Sintay and J.D. Bourland, “A PDE approach for quantifying and visualizing tumor progression and regression,” Proc. of SPIE (2009). (Accepted)

71. C.J. Vyborny, T. Doi, K.F. O'Shaughnessy, H.M. Romsdahl, A.C. Schneider, and A.A. Stein, "Breast Cancer: Importance of Spiculation in Computer-aided Detection," Radiology, 215(3), 703-707 (2000).

72. A.J. Evans, S.E. Pinder, J.J. James, I.O. Ellis, and E. Cornford, "Is mammographic spiculation an independent, good prognostic factor in screening-detected invasive breast cancer?"AJR Am. J. Roentgenol., 187(5), 1377-80 (2006).

73. B. Sahiner, H.P. Chan, N. Petrick, M.A. Helvie, and L.M. Hadjiiski, "Improvement of mammographic mass characterization using spiculation measures and morphological features," Med. Phys., 28(7), 1455-65 (2001).

74. S.L. Sailer, J.G. Rosenman, M. Soltys, T.J. Cullip, and J. Chen, "Improving treatment planning accuracy through multimodality imaging," Int’l. J. Radiat. Oncol. Biol. Phys., 35(1), 117-24 (1996).

75. G.P. Beyer, R.P. Velthuizen, F.R. Murtagh, J.L. Pearlman, "Technical aspects and evaluation methodology for the application of two automated brain MRI tumor segmentation methods in radiation therapy planning," Magn. Reson. Imaging, 24(9), 1167-78. (2006).

76. C. Brechbuhler, G. Gerig, and O. Kubler, “Parameterization of closed surfaces for 3-D shape description,” Computer Vision, Graphics, Image Processing: Image Understanding, 61, 154-170 (1995).

77. R. Munbodh, Z. Chen, D.A. Jaffray, D.J. Moseley, J. Knisely, and J.S. Duncan, “A frequency-based approach to locating common structure for 2D-3D intensity-based registration of setup images in prostate radiotherapy,” Med. Phys., 34(7), 3005-3017 (2007).

78. C.G. Small, “Techniques of Shape Analysis on Sets of Points,” Int’l. Stat. Rev., 56(3), 243-257 (1988).

79. T.F. Cootes, C.J. Taylor, D.H. Cooper, and J. Graham, "Training models of shape from sets of examples," Proc. British Machine Vision Conf., Springer, Berlin, 9–18 (1992).

80. I.E. Naqa, D. Yang, A. Apte, D. Khullar, S. Mutic, J. Zheng, J.D. Bradley, P. Grigsby, and J.O. Deasy, “Concurrent multimodality image segmentation by active contours for radiotherapy treatment planning,” Med. Phys. 34, 4738 (2007).

81. M.B. Sharpe, T. Craig, D.J. Moseley, “Image Guidance Treatment Target Localization Systems in IMRT-IGRT-SBRT – Advances in the Treatment Planning and Delivery of Radiotherapy,” Frontiers Radiat. Ther. Oncol., 40 (2007).

82. F.M. Khan, The Physics of Radiation Therapy, Lippincott Williams & Wilkins (2003).

83. D.A. Jaffray and J.J. Siewerdsen, “Cone-beam computed tomography with a flat-panel imager: Initial performance characterization,” Med. Phys. 27, 1311 (2000).

84. W.R. Hendee and E.R. Ritenour, Medical Imaging Physics, Wiley-Liss (2002).

APPENDIX A: INDEX OF IMPORTANT ROUTINES AND CODE

The following list of experimental scripts and core functions outlines the most important toolset code. For brevity it does not include every script, but all notable scripts are listed. Experiment scripts are inline code used to run tests and studies. Core functions are modular pieces of code that implement the various algorithms used by this method.

EXPERIMENT SCRIPTS

experiment_clinical_trial.m – Solves Poisson’s equation and finds the shape characteristic curves for all subjects in the clinical trial as identified in the “Clinical Data/trials” directory. All data are stored separately into .mat files. Trial data is in the Analyze .img format with associated .hdr files.

experiment_clinical_trial_survival_updated.m – Loads the desired set of survival data (a .mat file located in the “Clinical Data/trials” directory) and the shape characteristic curves corresponding to the subjects in the survival file. It then performs a statistical permutation test iteratively for a specified number of months (from 1 to 18 in this study) to study the significance of grouping shape characteristic curves based on length of survival (see Chapter ‎7).

experiment_direct_method_circle.m – Creates an ideal 2D circle and generates the solution to Poisson’s equation analytically and iteratively using finite differences to compare the error.

experiment_direct_method_square.m – Creates an ideal 2D square and generates the solution to Poisson’s equation analytically and iteratively using finite differences to compare the error.

experiment_ibsr_rl.m – Loads the specified anatomical segmentations from the IBSR data, computes the shape characteristic curves, and plots the comparison.

experiment_movie.m – Generates a movie of the iterative solutions for Poisson’s equation.

experiment_multi_shapes.m – Runs a test of 11 different variations on multi-shape images and plots the shape characteristic curves for comparison.

experiment_permutation_2d_synthetics_catdog.m – Compares the shape characteristic curves for 18 cats and 18 dogs and establishes significance using a Monte-Carlo permutation test.

experiment_phantom.m – Loads 3D MRI phantoms converted to binary maps: three cube-like and six sphere-like shapes. Each shape is analyzed in coarse, normal, and resliced geometries. Comparisons are made among the shape classes and between spheres and cubes via Monte-Carlo permutation tests. Ideal spheres and cubes are also modeled for comparison.

experiment_synthetic_shapes.m – Loads a variety of 2D synthetic shapes and plots the shape characteristic curves for each.

CORE FUNCTIONS

sintay_image2bmap - Converts an image file into a binary map. Tested only with JPG files at this time, but may work with other formats.

[bmap, I] = sintay_image2bmap(filename, level)

sintay_permutation_test - Perform a mean-based Monte-Carlo permutation test on the data from groups A and B with the number of permutations equal to 'permutations'. The data are regrouped many times in a random fashion and compared back to the original difference metric to find statistical significance of the grouping. The difference metric is the L2 norm.

[mean_a, mean_b, p, Di, D0] = sintay_permutation_test(group_a, group_b, permutations)

sintay_permutation_test_median - Perform a median-based Monte-Carlo permutation test on the data from groups A and B with the number of permutations equal to 'permutations'. The data are regrouped many times in a random fashion and compared back to the original difference metric to find statistical significance of the grouping. The difference metric is the L2 norm.

[median_a, median_b, p, Di, D0] = sintay_permutation_test_median(group_a, group_b, permutations)

sintay_psc2d - This function walks through all the steps needed to take a binary map and find the shape characteristic curve in 2D.

[img, u, d, nu, E_levels] = sintay_psc2d(image2d)

sintay_psc2d_characterization - Generates the shape characteristic curve based on the potential and displacement map (both 2D data) with a sample rate of E_sample.

[nu, E_levels] = sintay_psc2d_characterization(u,d,E_sample)
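Each point of the curve is the coefficient of variation (standard deviation over mean) of the displacement values on the boundary of one equipotential level set; empty levels are recorded as 0. A Python/NumPy sketch of that idea under simplifying assumptions (boundary taken as pixels with at least one 8-neighbor outside the level set; the function names are illustrative):

```python
import numpy as np

def level_boundary(mask):
    """Pixels of a boolean mask that have at least one 8-neighbor outside it."""
    padded = np.pad(mask, 1)
    m, n = mask.shape
    interior = np.ones_like(mask, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            interior &= padded[1 + di:m + 1 + di, 1 + dj:n + 1 + dj]
    return mask & ~interior

def shape_characteristic(u, d, e_sample=0.05):
    """Coefficient of variation of d over each equipotential boundary of u."""
    E = (u.min() - u) / (u.min() - u.max())     # normalize potential to [0, 1]
    levels = np.arange(0.0, 1.0 + e_sample / 2, e_sample)
    nu = np.zeros(len(levels))
    for idx, level in enumerate(levels):
        mask = (E >= level) & (E > 0)           # level set at or above 'level'
        sd = d[level_boundary(mask)]
        sd = sd[np.isfinite(sd)]
        if sd.size > 1 and sd.mean() != 0:
            nu[idx] = sd.std() / sd.mean()
    return nu, levels
```

A perfectly uniform displacement over every boundary yields a curve that is identically zero, which is the sense in which low nu indicates a regular (sphere-like) shape.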

sintay_psc2d_displacement - Computes the displacement map along the surface of the potential based on the input T-maps in 2D. This is also known as the geodesic distance.

D = sintay_psc2d_displacement(Tx,Ty)
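The map is built by repeatedly sweeping the in-shape pixels and updating each distance from its upwind neighbor along the normalized gradient, until the map stops changing. A minimal Python/NumPy sketch of this upwind iteration (the convergence tolerance and the function name are illustrative; pixels with undefined direction are assumed to be NaN, as produced by the T-map step):

```python
import numpy as np

def displacement(Tx, Ty, max_iter=5000, tol=1e-6):
    """Iteratively accumulate distance along the normalized gradient
    field (Tx, Ty); NaN entries mark pixels outside the shape."""
    D = np.zeros(Tx.shape)
    TT = np.abs(Tx) + np.abs(Ty)               # denominator of the update rule
    ys, xs = np.where(np.isfinite(Tx))         # in-shape pixels only
    for _ in range(max_iter):
        prev = D.copy()
        for y, x in zip(ys, xs):
            dx = -1 if Tx[y, x] > 0 else 1     # upwind neighbor in x
            dy = -1 if Ty[y, x] > 0 else 1     # upwind neighbor in y
            if Tx[y, x] == 0: dx = 0
            if Ty[y, x] == 0: dy = 0
            if 0 <= x + dx < D.shape[1] and 0 <= y + dy < D.shape[0]:
                D[y, x] = (1 + abs(Tx[y, x]) * D[y, x + dx]
                             + abs(Ty[y, x]) * D[y + dy, x]) / TT[y, x]
        if np.abs(D - prev).max() < tol:       # stop once the map is stable
            break
    D[~np.isfinite(Tx)] = np.nan
    return D
```

For a uniform field pointing in +x the result is simply the column index, i.e. distance traveled from the left edge.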

sintay_psc2d_level_map - Returns a binary map of the area at or above the level specified by min_level. Assumes the image is a normalized matrix of values ranging from 0 to 1.

bmap = sintay_psc2d_level_map(image, min_level)

sintay_psc2d_poisson_fd - Computes the 2D solution to Poisson's equation, also known as the potential, using finite differences.

u = sintay_psc2d_poisson_fd(bmap2d)
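The potential is the finite-difference solution of Poisson's equation with a constant source term inside the shape and a value of zero outside. A hedged Python/NumPy sketch using Jacobi iteration (the tolerance, iteration cap, and function name are illustrative choices, not the toolset's exact values):

```python
import numpy as np

def poisson_fd(bmap, tol=1e-6, max_iter=5000):
    """Jacobi iteration for Poisson's equation with a constant source
    term inside the binary map; u is held at 0 outside the shape."""
    inside = bmap.astype(bool)
    u = np.zeros(bmap.shape)
    for _ in range(max_iter):
        prev = u.copy()
        # 4-neighbor average plus the source term, on interior grid points
        u[1:-1, 1:-1] = 0.25 * (prev[2:, 1:-1] + prev[:-2, 1:-1] +
                                prev[1:-1, 2:] + prev[1:-1, :-2] + 1.0)
        u[~inside] = 0.0                       # Dirichlet boundary condition
        if np.abs(u - prev).max() < tol:       # converged
            break
    return u
```

The solution peaks at the shape's interior and falls to zero at its boundary, which is what makes the normalized equipotential levels usable as nested level sets.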

sintay_psc2d_tmaps - Finds the 2D T-maps for a given gradient and area of interest.

[Tx,Ty] = sintay_psc2d_tmaps(gx,gy, image2d)
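Conceptually the T-maps are a unit normalization of the gradient restricted to the shape, with out-of-shape or zero-gradient pixels flagged as NaN. A small Python/NumPy sketch (the function name is illustrative):

```python
import numpy as np

def tmaps(gx, gy, mask):
    """Unit-normalize the gradient field inside the mask.
    Pixels outside the mask, or with zero gradient, become NaN."""
    mag = np.sqrt(gx ** 2 + gy ** 2)
    invalid = ~mask.astype(bool) | (mag == 0)
    with np.errstate(invalid="ignore", divide="ignore"):
        Tx = np.where(invalid, np.nan, gx / mag)
        Ty = np.where(invalid, np.nan, gy / mag)
    return Tx, Ty
```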

sintay_psc2d_trim - Trims a 2D binary dataset down to the minimum size that contains the area of interest.

image2d_trim = sintay_psc2d_trim(image2d)
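The trim is a tight bounding-box crop of the nonzero pixels, which in Python/NumPy reduces to a few lines (the function name is illustrative):

```python
import numpy as np

def trim(bmap):
    """Crop a binary map to the tight bounding box of its nonzero pixels."""
    rows = np.flatnonzero(bmap.any(axis=1))    # row indices with content
    cols = np.flatnonzero(bmap.any(axis=0))    # column indices with content
    return bmap[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```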

sintay_psc3d - This function walks through all the steps needed to take a binary map and find the shape characteristic curve in 3D. Interpolating functionality is not used by default.

[img, u, d, nu, E_levels] = sintay_psc3d(image3d)

sintay_psc3d_characterization - Generates the shape characteristic curve based on the potential and displacement map (both 3D data) with a sample rate of E_sample.

[nu, E_levels] = sintay_psc3d_characterization(u,d,E_sample)

sintay_psc3d_displacement - Computes the displacement map along the surface of the potential based on the input T-maps in 3D. This is also known as the geodesic distance.

D = sintay_psc3d_displacement(Tx,Ty,Tz)

sintay_psc3d_level_map - Returns a binary map of the area at or above the level specified by min_level. Assumes the image is a normalized matrix of values ranging from 0 to 1.

bmap = sintay_psc3d_level_map(image, min_level)

sintay_psc3d_poisson_fd - Computes the 3D solution to Poisson's equation, also known as the potential, using finite differences.

u = sintay_psc3d_poisson_fd(bmap3d)

sintay_psc3d_tmaps - Finds the 3D T-maps for a given gradient and volume of interest.

[Tx,Ty,Tz] = sintay_psc3d_tmaps(gx,gy,gz, image3d)

sintay_psc3d_trim - Trims a 3D binary dataset down to the minimum size that contains the volume of interest.

image3d_trim = sintay_psc3d_trim(image3d)

APPENDIX B: TOOLSET CODE

% sintay_image2bmap

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Converts an image file into a binary map. Tested with JPG files only at

% this time, but should work with any image format supported by imread.

%

% INPUT

% filename - name/location of file

% level - the value at which greater than is 1 and less than is 0

%

% OUTPUT

% bmap - binary map (logical)

% I - processed image

function [bmap, I] = sintay_image2bmap(filename, level)

I = imread(filename);

I = 1-mat2gray(I); % invert

I = padarray(I,[1 1],'both'); % pad

bmap = sintay_psc2d_level_map(I, level);

% sintay_permutation_test

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Perform a mean-based Monte-Carlo permutation test on the data from groups

% a and b with the number of permutations equal to 'permutations'. The

% data are regrouped many times in a random fashion and compared back to the

% original difference metric to find statistical significance of the

% grouping. The difference metric is the L2 norm.

%

% INPUT

% group_a - items in dimension 1, feature vectors in dimension 2

% group_b - items in dimension 1, feature vectors in dimension 2

% permutations - number of random permutations

%

% OUTPUT

% mean_a - mean of group a items

% mean_b - mean of group b items

% p - p-value for statistical significance of null hypothesis rejection

% Di - permutation L2-norm results

% D0 - baseline L2-norm

function [mean_a, mean_b, p, Di, D0] = sintay_permutation_test(group_a, group_b, permutations)

% Iterations (N) of permutation

N = permutations;

M = 0;

% Find group means at each discretized point on the curve

mean_a = mean(group_a);

mean_b = mean(group_b);

% Original group sizes

size_a = size(group_a,1);

size_b = size(group_b,1);

% Find true difference

D0 = sqrt(sum((mean_a - mean_b).^2));

% Converge groups

basket = [group_a; group_b];

% Permutation loop

for i = 1:N

permutation = randperm(size(basket,1));

basket_a = basket(permutation(1:size_a),:);

basket_b = basket(permutation(size_a+1:size_a+size_b),:);

mean_basket_a = mean(basket_a);

mean_basket_b = mean(basket_b);

Di(i) = sqrt(sum((mean_basket_a - mean_basket_b).^2));

if Di(i) > D0

M = M + 1;

end

clear basket_a basket_b mean_basket_a mean_basket_b permutation;

end

p = (M)/N;

% no permutation exceeded D0, so the true p-value is < 1/permutations; report that bound

if p == 0

p = 1/permutations;

end

% sintay_permutation_test_median

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Perform a median-based Monte-Carlo permutation test on the data from groups

% a and b with the number of permutations equal to 'permutations'. The

% data are regrouped many times in a random fashion and compared back to the

% original difference metric to find statistical significance of the

% grouping. The difference metric is the L2 norm.

%

% INPUT

% group_a - items in dimension 1, feature vectors in dimension 2

% group_b - items in dimension 1, feature vectors in dimension 2

% permutations - number of random permutations

%

% OUTPUT

% median_a - median of group a items

% median_b - median of group b items

% p - p-value for statistical significance of null hypothesis rejection

% Di - permutation L2-norm results

% D0 - baseline L2-norm

function [median_a, median_b, p, Di, D0] = sintay_permutation_test_median(group_a, group_b, permutations)

% Iterations (N) of permutation

N = permutations;

M = 0;

% Find group medians at each discretized point on the curve

median_a = median(group_a);

median_b = median(group_b);

% Original group sizes

size_a = size(group_a,1);

size_b = size(group_b,1);

% Find true difference

D0 = sqrt(sum((median_a - median_b).^2));

% Converge groups

basket = [group_a; group_b];

% Permutation loop

for i = 1:N

permutation = randperm(size(basket,1));

basket_a = basket(permutation(1:size_a),:);

basket_b = basket(permutation(size_a+1:size_a+size_b),:);

median_basket_a = median(basket_a);

median_basket_b = median(basket_b);

Di(i) = sqrt(sum((median_basket_a - median_basket_b).^2));

if Di(i) > D0

M = M + 1;

end

clear basket_a basket_b median_basket_a median_basket_b permutation;

end

p = (M)/N;

% no permutation exceeded D0, so the true p-value is < 1/permutations; report that bound

if p == 0

p = 1/permutations;

end

% sintay_psc2d

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% This function walks through all the steps needed to take a binary map and

% find the shape characteristic curve in 2D. Interpolating functionality

% is not used by default.

%

% INPUT

% image2d - binary map of shape to be analyzed

%

% OUTPUT

% img - binary map of final shape analyzed after trim (or interpolation if

% enabled)

% u - the solution to Poisson's equation, also known as the potential

% d - the displacement map

% nu - shape characteristic curve

% E_levels - plotting variable that specifies the discretization of nu

function [img, u, d, nu, E_levels] = sintay_psc2d(image2d)

E_sample = 0.05; % Sample rate for the shape characteristic curve

time_start = clock;

% Trim data

padsize = 2;

image2d = sintay_psc2d_trim(image2d);

image2d = padarray(image2d,[padsize padsize]);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Data trimmed')]);

% Interpolate small volumes

% if min(size(image2d)) < 40

% interp_sample = 0.5;

% [interpX,interpY] = meshgrid(1:size(image2d,2),1:size(image2d,1));

% [interpXI,interpYI] = meshgrid(1:interp_sample:size(image2d,2),1:interp_sample:size(image2d,1));

% image2d = interp2(interpX,interpY,image2d,interpXI,interpYI,'linear');

% image2d = sintay_psc2d_level_map(image2d, 0.5);

% disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Data interpolated')]);

% end

img = image2d;

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Starting Poisson Solver')]);

% Find Poisson using finite differences

u = sintay_psc2d_poisson_fd(image2d);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Poisson found')]);

% Compute gradient

[gx,gy] = gradient(u);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Gradient found')]);

% Compute normalized gradient components (T-maps)

[Tx,Ty] = sintay_psc2d_tmaps(gx,gy, image2d);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Normalized gradient components found')]);

% Compute displacement map

d = sintay_psc2d_displacement(Tx,Ty);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Displacement map computed')]);

% Find shape characteristic

[nu, E_levels] = sintay_psc2d_characterization(u,d,E_sample);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC2D - Shape characteristic curves computed')]);

% sintay_psc2d_characterization

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Generates the shape characteristic curve based on the potential and

% displacement map (both 2D data) with a sample rate of E_sample.

%

% INPUT

% u - the solution to Poisson's equation, also known as the potential

% d - the displacement map

% E_sample - the sample rate for the curve

%

% OUTPUT

% nu - the shape characteristic curve

% E_levels - plotting variable that specifies the discretization of nu

function [nu, E_levels] = sintay_psc2d_characterization(u,d,E_sample)

% Normalize Poisson solution (equipotential map)

E = (min(min(u)) - u)/(min(min(u)) - max(max(u)));

% Initialize

nu_count = 1;

s_count = 1;

E_levels = 0;

nu = 0;

levels = 0:E_sample:1;

% Traverse equipotential surfaces (levels)

for level = 0:E_sample:1

% Create level set map of E for the given level

level_set = sintay_psc2d_level_map(E, level);

bindex = 0;

for i = 2:size(level_set,1)-1

for j = 2:size(level_set,2)-1

% 8-neighbor test

if (level_set(i,j) == 1) && ( ...

(level_set(i ,j+1) == 0) || ...

(level_set(i ,j-1) == 0) || ...

(level_set(i+1,j ) == 0) || ...

(level_set(i+1,j+1) == 0) || ...

(level_set(i+1,j-1) == 0) || ...

(level_set(i-1,j ) == 0) || ...

(level_set(i-1,j+1) == 0) || ...

(level_set(i-1,j-1) == 0) ...

)

bindex = bindex + 1;

B(bindex,:) = [i,j];

end

end

end

% Grab values from D

if(bindex > 0)

for i = 1:(size(B,1))

if(~isnan(d(B(i,1),B(i,2))))

SD(s_count) = d(B(i,1),B(i,2));

s_count = s_count + 1;

end

end

% Calculate nu and increment

if (s_count) > 1

nu(nu_count) = std(SD)/mean(SD);

E_levels(nu_count) = level;

nu_count = nu_count + 1;

s_count = 1;

end

else

nu(nu_count) = 0;

E_levels(nu_count) = level;

nu_count = nu_count + 1;

s_count = 1;

end

% Clear variables

clear B level_set SD;

end

% Account for empty set at 1

if(nu_count == length(levels))

nu(nu_count) = 0;

E_levels(nu_count) = 1;

end

% sintay_psc2d_displacement

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Computes the displacement map along the surface of the potential based on

% the input T-maps. This is also known as the geodesic distance.

%

% INPUT

% Tx - T-map of x, the normalized gradient component (of u) for x

% Ty - T-map of y, the normalized gradient component (of u) for y

%

% OUTPUT

% D - the displacement map

function D = sintay_psc2d_displacement(Tx,Ty)

% Upwind differencing (negate for downwind)

w_dx_pos = -1; w_dx_neg = 1;

w_dy_pos = -1; w_dy_neg = 1;

% Setup D0

D = zeros(size(Tx));

% Calculate denominator of eqn (3) in Haidar paper

TT = abs(Tx)+abs(Ty);

% Find array of finite pixels

[iy,ix] = ind2sub(size(Tx),find(isfinite(Tx)));

% Setup variables

i = 0; continue_traversal = 1;

% Iterative process

while continue_traversal == 1

i = i + 1;

prevD = D;

% Cycle through points inside geometric boundary only

for j = 1:length(iy)

% Index assignment

y = iy(j); x = ix(j);

% Winding scheme

if Tx(y,x) > 0, dx = w_dx_pos;

else dx = w_dx_neg; end

if Ty(y,x) > 0, dy = w_dy_pos;

else dy = w_dy_neg; end

% If no gradient component in this direction, set to zero

if Tx(y,x) == 0, dx = 0; end

if Ty(y,x) == 0, dy = 0; end

% Final check to make sure update point is inside of the

% matrix D

if ( ...

((x+dx) ~= 0) && ...

((y+dy) ~= 0) ...

) ...

&& ...

( ...

((x+dx) <= size(D,2)) && ...

((y+dy) <= size(D,1)) ...

)

% Update rule - eqn (3) in Haidar paper

D(y,x) = (1 + abs(Tx(y,x))*D(y,x+dx) + abs(Ty(y,x))*D(y+dy,x))/TT(y,x);

end

end

% Check for convergence

if max(max(abs(D - prevD))) < 1e-6

continue_traversal = 0;

end

end

% sintay_psc2d_poisson_fd

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Computes the 2D solution to Poisson's equation, also known as the

% potential, using finite differences.

%

% INPUT

% bmap2d - binary map of the area of interest

%

% OUTPUT

% u - the solution to Poisson's equation, also known as the potential

function u = sintay_psc2d_poisson_fd(bmap2d)

u = zeros(size(bmap2d));

k = 0;

continue_traversal = 1;

% Jacobi iteration with a constant source term inside the shape

while continue_traversal == 1

k = k + 1;

prevu = u;

for i = 2:size(bmap2d,1)-1

for j = 2:size(bmap2d,2)-1

if bmap2d(i,j) == 1

u(i,j) = 0.25*(prevu(i+1,j) + prevu(i-1,j) + prevu(i,j+1) + prevu(i,j-1) + 1);

end

end

end

% Check for convergence or iteration cap

if (max(max(abs(u - prevu))) < 1e-6) || (k >= 5000)

continue_traversal = 0;

end

end

% Echo number of iterations

u_iterations = k

% sintay_psc2d_tmaps

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Finds the T-maps for a given gradient and area of interest.

%

% INPUT

% gx - x gradient component

% gy - y gradient component

% image2d - binary map of the area of interest

%

% OUTPUT

% Tx - normalized x component

% Ty - normalized y component

function [Tx,Ty] = sintay_psc2d_tmaps(gx,gy, image2d)

% Initial calc

Tx = zeros(size(gx)); Ty = zeros(size(gy));

for i = 1:size(gx,1)

for j = 1:size(gy,2)

if (isfinite(gx(i,j)) == 1)&&(image2d(i,j) == 1)

Tx(i,j) = gx(i,j)/sqrt(eps+(gx(i,j)^2) + (gy(i,j)^2));

Ty(i,j) = gy(i,j)/sqrt(eps+(gx(i,j)^2) + (gy(i,j)^2));

if (Tx(i,j) == 0) && (Ty(i,j) == 0)

Tx(i,j) = NaN;

Ty(i,j) = NaN;

end

else

Tx(i,j) = NaN;

Ty(i,j) = NaN;

end

end

end

% Re-normalization

Tx2 = zeros(size(Tx)); Ty2 = zeros(size(Tx));

Tx2(isnan(Tx)) = NaN; Ty2(isnan(Tx)) = NaN;

for i = 1:size(Tx2,1)

for j = 1:size(Tx2,2)

if (isfinite(gx(i,j)) == 1)&&(image2d(i,j) == 1)

Tx2(i,j) = Tx(i,j)/sqrt((Tx(i,j)^2) + (Ty(i,j)^2));

Ty2(i,j) = Ty(i,j)/sqrt((Tx(i,j)^2) + (Ty(i,j)^2));

end

end

end

Tx = Tx2; Ty = Ty2;

% sintay_psc2d_trim

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Trims a binary dataset down to the minimum size that contains the area of

% interest.

%

% INPUT

% image2d - binary map

%

% OUTPUT

% image2d_trim - trimmed binary map

function image2d_trim = sintay_psc2d_trim(image2d)

i_first = 1;

i_last = size(image2d,1);

j_first = 1;

j_last = size(image2d,2);

i = 0;

continue_search = 1;

while continue_search == 1

i = i + 1;

if sum(sum(abs(image2d(i,:)))) ~= 0

i_first = i;

continue_search = 0;

end

end

i = i_last+1;

continue_search = 1;

while continue_search == 1

i = i - 1;

if sum(sum(abs(image2d(i,:)))) ~= 0

i_last = i;

continue_search = 0;

end

end

j = 0;

continue_search = 1;

while continue_search == 1

j = j + 1;

if sum(sum(abs(image2d(:,j)))) ~= 0

j_first = j;

continue_search = 0;

end

end

j = j_last+1;

continue_search = 1;

while continue_search == 1

j = j - 1;

if sum(sum(abs(image2d(:,j)))) ~= 0

j_last = j;

continue_search = 0;

end

end

image2d_trim = image2d(i_first:i_last,j_first:j_last);

% sintay_psc3d

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% This function walks through all the steps needed to take a binary map and

% find the shape characteristic curve in 3D. Interpolating functionality

% is not used by default.

%

% INPUT

% image3d - 3D binary map of shape to be analyzed

%

% OUTPUT

% img - binary map of final shape analyzed after trim (or interpolation if

% enabled)

% u - the solution to Poisson's equation, also known as the potential

% d - the displacement map

% nu - shape characteristic curve

% E_levels - plotting variable that specifies the discretization of nu

function [img, u, d, nu, E_levels] = sintay_psc3d(image3d)

E_sample = 0.05; % Sample rate for the shape characteristic curve

time_start = clock;

% Trim data

padsize = 2;

image3d = sintay_psc3d_trim(image3d);

image3d = padarray(image3d,[padsize padsize padsize]);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Data trimmed')]);

images_size = size(image3d)

% Find Poisson using finite differences

img = image3d;

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Starting Poisson Solver')]);

u = sintay_psc3d_poisson_fd(image3d);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Poisson found')]);

% Compute gradient

[gx,gy,gz] = gradient(u);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Gradient found')]);

% Compute normalized gradient components (T-maps)

[Tx,Ty,Tz] = sintay_psc3d_tmaps(gx,gy,gz, image3d);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Normalized gradient components found')]);

% Compute displacement map

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Computing displacement map...')]);

d = sintay_psc3d_displacement(Tx,Ty,Tz);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Displacement map computed')]);

% Find shape characteristic

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Computing shape characteristic curve...')]);

[nu, E_levels] = sintay_psc3d_characterization(u,d,E_sample);

disp([num2str(etime(clock, time_start)), sprintf('\t - PSC3D - Shape characteristic curves computed')]);

% sintay_psc3d_characterization

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Generates the shape characteristic curve based on the potential and

% displacement map (both 3D data) with a sample rate of E_sample.

%

% INPUT

% u - the solution to Poisson's equation, also known as the potential

% d - the displacement map

% E_sample - the sample rate for the curve

%

% OUTPUT

% nu - the shape characteristic curve

% E_levels - plotting variable that specifies the discretization of nu

function [nu, E_levels] = sintay_psc3d_characterization(u,d,E_sample)

% Normalize Poisson solution (equipotential map)

E = (min(min(min(u))) - u)/(min(min(min(u))) - max(max(max(u))));

% Initialize

nu_count = 1;

E_levels = 0;

nu = 0;

levels = 0:E_sample:1;

% Traverse equipotential surfaces (levels)

for level = 0:E_sample:1

% Create level set map of E for the given level

%level_set( (E >= level) & (E > 0) ) = 1; % NON-RECURSIVE BUT SLOWER ON LARGE DATASETS!

level_set = sintay_psc3d_level_map(E, level);

% Find perimeter

level_set_perim = bwperim(level_set,26);

% Gather surface values from D

SD = d(level_set_perim);

% Calculate nu and increment

if length(SD) > 1

nu(nu_count) = std(SD)/mean(SD);

if(isnan(nu(nu_count)))

nu(nu_count) = 0;

end

E_levels(nu_count) = level;

nu_count = nu_count + 1;

end

% Clear variables

clear level_set SD;

end

% Account for empty set at 1

if(nu_count == length(levels))

nu(nu_count) = 0;

E_levels(nu_count) = 1;

end

% sintay_psc3d_displacement

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Computes the displacement map along the surface of the potential based on

% the input T-maps. This is also known as the geodesic distance.

%

% INPUT

% Tx - T-map of x, the normalized gradient component (of u) for x

% Ty - T-map of y, the normalized gradient component (of u) for y

% Tz - T-map of z, the normalized gradient component (of u) for z

%

% OUTPUT

% D - the displacement map

function D = sintay_psc3d_displacement(Tx,Ty,Tz)

% Upwind differencing (negate for downwind)

w_dx_pos = -1; w_dx_neg = 1;

w_dy_pos = -1; w_dy_neg = 1;

w_dz_pos = -1; w_dz_neg = 1;

% Setup D0

D = zeros(size(Tx));

% Calculate denominator of eqn (3) in Haidar paper

TT = abs(Tx)+abs(Ty)+abs(Tz);

% Find array of finite pixels

[iy,ix,iz] = ind2sub(size(Tx),find(isfinite(Tx)));

% Setup variables

i = 0; continue_traversal = 1;

% Iterative process

while continue_traversal == 1

i = i + 1;

prevD = D;

% Cycle through points inside geometric boundary only

for j = 1:length(iy)

% Index assignment

y = iy(j); x = ix(j); z = iz(j);

% Winding scheme

if Tx(y,x,z) > 0, dx = w_dx_pos;

else dx = w_dx_neg; end

if Ty(y,x,z) > 0, dy = w_dy_pos;

else dy = w_dy_neg; end

if Tz(y,x,z) > 0, dz = w_dz_pos;

else dz = w_dz_neg; end

% If no gradient component in this direction, set to zero

if Tx(y,x,z) == 0, dx = 0; end

if Ty(y,x,z) == 0, dy = 0; end

if Tz(y,x,z) == 0, dz = 0; end

% Final check to make sure update point is inside of the

% matrix D

if ( ...

((x+dx) ~= 0) && ...

((y+dy) ~= 0) && ...

((z+dz) ~= 0) ...

) ...

&& ...

( ...

((x+dx) <= size(D,2)) && ...

((y+dy) <= size(D,1)) && ...

((z+dz) <= size(D,3)) ...

)

% Update rule - eqn (3) in Haidar paper

D(y,x,z) = (1 + abs(Tx(y,x,z))*D(y,x+dx,z) + abs(Ty(y,x,z))*D(y+dy,x,z) + abs(Tz(y,x,z))*D(y,x,z+dz))/TT(y,x,z);

end

end

% Check for convergence

if max(max(max(abs(D - prevD)))) < 1e-6

continue_traversal = 0;

end

end

% sintay_psc3d_poisson_fd

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Computes the 3D solution to Poisson's equation, also known as the

% potential, using finite differences.

%

% INPUT

% bmap3d - binary map of the volume of interest

%

% OUTPUT

% u - the solution to Poisson's equation, also known as the potential

function u = sintay_psc3d_poisson_fd(bmap3d)

u = zeros(size(bmap3d));

k = 0;

continue_traversal = 1;

% Jacobi iteration with a constant source term inside the shape

while continue_traversal == 1

k = k + 1;

prevu = u;

for i = 2:size(bmap3d,1)-1

for j = 2:size(bmap3d,2)-1

for m = 2:size(bmap3d,3)-1

if bmap3d(i,j,m) == 1

u(i,j,m) = (prevu(i+1,j,m) + prevu(i-1,j,m) + prevu(i,j+1,m) + prevu(i,j-1,m) + prevu(i,j,m+1) + prevu(i,j,m-1) + 1)/6;

end

end

end

end

% Check for convergence or iteration cap

if (max(max(max(abs(u - prevu)))) < 1e-6) || (k >= 50000)

continue_traversal = 0;

end

end

% Echo number of iterations

u_iterations = k

% sintay_psc3d_tmaps

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Finds the T-maps for a given gradient and volume of interest.

%

% INPUT

% gx - x gradient component

% gy - y gradient component

% gz - z gradient component

% image3d - binary map of the volume of interest

%

% OUTPUT

% Tx - normalized x component

% Ty - normalized y component

% Tz - normalized z component

function [Tx,Ty,Tz] = sintay_psc3d_tmaps(gx,gy,gz, image3d)

Tx = zeros(size(gx)); Ty = zeros(size(gy)); Tz = zeros(size(gz));

for i = 1:size(gx,1)

for j = 1:size(gy,2)

for k = 1:size(gz,3)

if (isfinite(gx(i,j,k)) == 1)&&(image3d(i,j,k) == 1)

Tx(i,j,k) = gx(i,j,k)/sqrt(eps+(gx(i,j,k)^2) + (gy(i,j,k)^2) + (gz(i,j,k)^2));

Ty(i,j,k) = gy(i,j,k)/sqrt(eps+(gx(i,j,k)^2) + (gy(i,j,k)^2) + (gz(i,j,k)^2));

Tz(i,j,k) = gz(i,j,k)/sqrt(eps+(gx(i,j,k)^2) + (gy(i,j,k)^2) + (gz(i,j,k)^2));

if( Tx(i,j,k) == 0 ) && (Ty(i,j,k) == 0) && (Tz(i,j,k) == 0)

Tx(i,j,k) = NaN;

Ty(i,j,k) = NaN;

Tz(i,j,k) = NaN;

end

else

Tx(i,j,k) = NaN;

Ty(i,j,k) = NaN;

Tz(i,j,k) = NaN;

end

end

end

end

% Re-normalization

Tx2 = zeros(size(Tx)); Ty2 = zeros(size(Tx)); Tz2 = zeros(size(Tx));

Tx2(isnan(Tx)) = NaN; Ty2(isnan(Tx)) = NaN; Tz2(isnan(Tx)) = NaN;

for i = 1:size(Tx2,1)

for j = 1:size(Tx2,2)

for k = 1:size(Tx2,3)

if (isfinite(gx(i,j,k)) == 1)&&(image3d(i,j,k) == 1)

Tx2(i,j,k) = Tx(i,j,k)/sqrt((Tx(i,j,k)^2) + (Ty(i,j,k)^2) + (Tz(i,j,k)^2));

Ty2(i,j,k) = Ty(i,j,k)/sqrt((Tx(i,j,k)^2) + (Ty(i,j,k)^2) + (Tz(i,j,k)^2));

Tz2(i,j,k) = Tz(i,j,k)/sqrt((Tx(i,j,k)^2) + (Ty(i,j,k)^2) + (Tz(i,j,k)^2));

end

end

end

end

Tx = Tx2; Ty = Ty2; Tz = Tz2;

% sintay_psc3d_trim

%

% Created by: Benjamin Sintay (bjsintay@), 2006-2008

%

% DESCRIPTION

% Trims a binary dataset down to the minimum size that contains the volume

% of interest.

%

% INPUT

% image3d - binary map

%

% OUTPUT

% image3d_trim - trimmed binary map

function image3d_trim = sintay_psc3d_trim(image3d)

i_first = 1;

i_last = size(image3d,1);

j_first = 1;

j_last = size(image3d,2);

k_first = 1;

k_last = size(image3d,3);

i = 0;

continue_search = 1;

while continue_search == 1

i = i + 1;

if sum(sum(abs(image3d(i,:,:)))) ~= 0

i_first = i;

continue_search = 0;

end

end

i = i_last+1;

continue_search = 1;

while continue_search == 1

i = i - 1;

if sum(sum(abs(image3d(i,:,:)))) ~= 0

i_last = i;

continue_search = 0;

end

end

j = 0;

continue_search = 1;

while continue_search == 1

j = j + 1;

if sum(sum(abs(image3d(:,j,:)))) ~= 0

j_first = j;

continue_search = 0;

end

end

j = j_last+1;

continue_search = 1;

while continue_search == 1

j = j - 1;

if sum(sum(abs(image3d(:,j,:)))) ~= 0

j_last = j;

continue_search = 0;

end

end

k = 0;

continue_search = 1;

while continue_search == 1

k = k + 1;

if sum(sum(abs(image3d(:,:,k)))) ~= 0

k_first = k;

continue_search = 0;

end

end

k = k_last+1;

continue_search = 1;

while continue_search == 1

k = k - 1;

if sum(sum(abs(image3d(:,:,k)))) ~= 0

k_last = k;

continue_search = 0;

end

end

image3d_trim = image3d(i_first:i_last,j_first:j_last,k_first:k_last);

VITAE

Benjamin Jeremiah Sintay was born in Wichita Falls, TX, on May 31, 1981, the son of Gerald and Donna Sintay. He spent his pre-college years in Seattle, WA, until the age of nine, and then in Mooresville, NC, through his high school graduation in 1999. Benjamin completed two undergraduate degrees, in Computer Engineering and Electrical Engineering, in 2004 at North Carolina State University in Raleigh, NC, with concentrated studies in microprocessor design and architecture, computer science, mixed-signal design, and entrepreneurship. His graduate education began the same year at Wake Forest University in Winston-Salem, NC, in pursuit of a doctoral degree in Biomedical Engineering. Benjamin engaged in various avenues of research, including radiation therapy, medical imaging, biomedical device design, and mechanical nociception. He coauthored two IRB-approved clinical trials related to mechanical pain mechanisms and tumor shape metrics. Benjamin received a Ph.D. in Biomedical Engineering from the Virginia Tech – Wake Forest University School of Biomedical Engineering and Sciences in December of 2008.
