Vanderbilt University

Department of Biomedical Engineering

Design of a Hadamard Transform Spectral Imaging System for Brain Tumor Resection Guidance

Group 10

Members:

Paul Holcomb

Tasha Nalywajko

Melissa Walden

Abstract

According to studies completed by the National Institutes of Health, brain tumors represent one of the deadliest forms of cancer, with an average mortality rate of 71% in diagnosed cases.1 The use of imaging to help locate and resect tumors was shown to increase the median survival time for patients suffering from glioblastoma multiforme from 19 weeks to 70 weeks.2 While it is clear that the use of imaging methods improves patient prognosis and lifespan, current methods are preoperative and cannot account for shift and deformation of brain tissue during surgery. A fast, intraoperative imaging technique is needed to reduce the residual tumor mass left behind without compromising neurological function. To fulfill this need, we propose a Hadamard transform spectral imaging system, which allows differentiation of tumor and normal tissue intraoperatively using differences in the diffuse reflectance spectra of the tissue.3,4 The Hadamard transform is applied through use of a digital micromirror device (DMD). Use of the Hadamard transform allows a three-dimensional image (two spatial dimensions, one spectral dimension) to be compressed into a series of two-dimensional images for collection by a CCD camera. The signal-to-noise ratio of the system is also improved by a factor of √n with application of the Hadamard transform, where n is the order of the matrix.5 The system is composed of three stages: a collection stage (Stage 1) where light from the sample is collected, collimated, and then compressed to fit within the 10mm square area of the DMD; a compression stage (Stage 2) where the transformed image from the DMD is compressed in one dimension; and a dispersion stage (Stage 3) where a grating disperses the spectral components of the line image across a CCD camera.
Stage 1 has been completed to specifications, but due to the substantial amount of troubleshooting involved in establishing the functioning characteristics of these system components, Stages 2 and 3 were not assembled.

Introduction

According to studies completed by the National Institutes of Health, brain tumors represent one of the deadliest forms of cancer, with an average mortality rate of 71% in diagnosed cases.1 While brain tumors are not the most common form of cancer, with an average incidence of only 6.6 cases per 100,000 persons per year, their high mortality rate makes it crucial to improve upon current treatment methods.1 In 2002, roughly 40,000 people were expected to develop primary brain tumors in the US, in addition to the 170,000 estimated brain metastases resulting from cancers elsewhere in the body.6 Of the 40,000 primary cases, over 13,000 people died from complications of malignant brain tumors.6 Studies have shown a correlation between the degree of infiltrating tumor margin and patient prognosis, as can be seen in Figure 1.7-9 Lacroix et al performed a study of 416 patients, using magnetic resonance (MR) imaging to visualize tumor necrosis, edema, and tumor enhancement preoperatively. The findings showed that at least 89% of the tumor must be removed to lengthen survival, while resection of over 98% of the tumor produced significant improvement in prognosis. When patients were separated by cancer severity, marked differences were found between those with greater than 98% tumor resection and those with less: the greater-than-98% group improved their median survival from 10 months after surgery to 19 months (a 90% increase). In a separate study specific to glioblastoma multiforme (Grade IV cancer), the most aggressive and infiltrative brain tumor, the use of imaging to help locate and resect tumors increased the median survival time from 19 weeks to 70 weeks.2,8

While it is clear that the use of imaging methods improves patient prognosis and lifespan, current methods do not provide definitive information during the operation to guarantee complete resection of the tumor. With these methods, tumor margins are distinctly defined, but because the images are preoperative there is no guarantee that those margins will not shift during surgery. The surgeon must therefore determine tumor margins during the operation: during resection, tissue samples are sent to pathology for testing. Not only does this increase surgery time (15 minutes per sample sent to pathology), it also increases the probability that either tumor tissue will be left behind or vital brain tissue will be removed. A fast, intraoperative imaging technique is needed to reduce the residual tumor mass left behind without compromising neurological function.

Besides improving tumor resection and increasing life expectancy for the patient, an imaging system with the qualities mentioned above would also reduce overall cost, both for the patient and the hospital. The operating room is billed per half- or quarter-hour, depending on the hospital. The average tumor resection surgery takes about 4 hours, 2.5 of which are spent in the OR (not including pathology lab waiting time). The average bill for OR usage ranges from $10,000 to $15,000, depending on the rate and length of the procedure (interview). Recovery time is also expensive; a Pennsylvania hospital in 2001 charged $2,152 for 24 hours in an ICU bed and $1,360 for a regular floor bed.10 The recovery time for brain tumor resection is 3-6 days, depending on the amount of tumor resected and tumor location – the shorter the surgery, the less time spent in the hospital.11 Table 1 shows a comparison of costs for resection and hospitalization for surgical procedures with and without pathology lab testing, assuming $2,500 per half hour for OR usage and a two-day requirement in the ICU.

Current Methods

While MRI provides high-resolution images of the brain, it is not functionally optimal for guiding brain surgery, so a better methodology needs to be developed. Lin et al (2000) discovered that brain tumors have different diffuse reflectance spectra than normal brain tissue because of the variance in protein composition between the two tissues. These differing optical properties can be measured by optical spectroscopy via the diffuse reflectance spectra. In Lin et al (2000 and 2001), a natural fluorescence measurement (F) and a diffuse reflectance measurement (Rd) are taken to differentiate between gray matter, white matter, and cancerous tissue. A “Gaser” probe is used to both transmit and collect light from the sample area (Figure 2) with fiber optics. The hexagon represents the 30° polishing angles applied to the fibers to reduce divergence and maintain the 1 mm spot size. After a background reading is taken to eliminate contributions from ambient light, a pulsed nitrogen laser source (λ = 337 nm) excites the tissue to emit fluorescence; a white light source is then used to provide the Rd readings. The other five optical fibers collect the light remitted from the tissue and send it to a spectrograph for a composite measurement of the tissue spectra. The advantage of optical point spectroscopy is its speed – the only limiting factors are the processing time of the computer and the time needed to switch from the nitrogen laser to the white light source, allowing near-real-time measurement of the tissue owing to the high excitation irradiance and the collection efficiency afforded by contact with the tissue.

The use of optical spectroscopy has been shown to successfully delineate different brain tissue types both in vitro and in vivo.2-4 The Lin et al in vitro study examined 127 sites from 20 patients, ranging from healthy gray and white matter to primary and secondary tumors. Of the two excitation lasers tested on the tissue samples, the 337 nm nitrogen laser produced the highest-intensity fluorescence spectra for all brain tissue types, each with a single maximum peak at 460 nm. Likewise, the diffuse reflectance of the various tissue classes demonstrated a maximum emission peak at 625 nm (Figure 3). This is an ideal wavelength to measure because the optical properties of blood at 625 nm allow differentiation from areas with high blood flow. In comparing these two readings, algorithms were developed: for primary tumors, graphing F460 vs. Rd(625) with an applied threshold provided 96% sensitivity and 97% specificity; for secondary tumors, graphing F460 vs. the ratio F460:Rd(460) yielded 95% sensitivity and 90% specificity. A two-step algorithm was also developed that does not rely on the primary/secondary distinction: it first separates all samples – even tumor – by whether they derive from white or gray matter using diffuse reflectance information, then applies different second steps to the white matter samples (b) and the gray matter samples (a) using combinations of F and Rd information.

Similar results were seen in the Lin et al in vivo study. The same methods used in vitro were clinically tested on 26 patients during surgery, with a sample of each tested area removed for pathological analysis. The fluorescence and diffuse reflectance spectra from the in vivo tissue showed patterns similar to the in vitro tissue, though differences were noted: tumor tissue showed higher fluorescence at 460 nm in vivo and was virtually indistinguishable from gray matter in the diffuse reflectance spectra (Figure 4). A two-step algorithm applied to both primary and secondary tissues provided an overall sensitivity of 89% and specificity of 76%. Although these values are lower than in the previous study, it was noted that five out of five areas deemed cancerous by the optical probe but not by CT and MR spectroscopy were identified pathologically as tumors or tumor margins. Because of the success of the in vivo experiments, a much larger study is being conducted in three cities across the country, with over 200 patients already enrolled.

The main limitation for probe-based spectroscopy is the size of the area investigated. The maximum tissue diameter the probe can interrogate is roughly 1 mm, which simply allows for point measurements. Since proof of principle has been established by the two aforementioned studies, the next step for this process is to evaluate a larger area of the brain surface via imaging; doing so will allow neurosurgeons to collect data from a significantly larger suspect area than with the point source and, subsequently, allow for intraoperative visualization of the tumor and its margins. This will allow for a clear-cut demarcation for tumor excision without need for pathological testing of excised tissue. Ideally, tumor tissue and healthy tissue would be assigned different colors via false-color mapping of diffuse reflectance spectral differences to delineate between the two. While the Gaser probe data output could be analyzed into varying colors for the neurosurgeon to interpret, the size of the sample is still small compared to the overall size of the tumor. A point reading signifying tumor cannot determine how large the tumor is or where the reading was taken in relation to the tumor. Thus, the ideal system would image large areas of the brain (like MRI), but be used in a real-time setting during surgery (like optical spectroscopy).

With these system requirements in mind, the design goals and criteria for a prototype system can be established. First, the image needs to be produced in real time; waiting 15-20 minutes for pathology to tell the neurosurgeon whether to continue or stop resection is expensive and a waste of operating room resources. Second, the system must accurately distinguish between healthy and tumor tissue. The studies conducted by Lin et al showed the higher accuracy of optical spectroscopy in determining tissue type over MRI scans; 5 sites deemed cancerous or tumor margin by the Gaser probe (and confirmed by pathology) were missed in the pre-operative MRI and classified as normal. Third, the imaged area needs to precisely overlay the area of interest – if the image overlay shifts by even a millimeter, the surgeon could remove healthy tissue regardless of how clearly the tumor margins are demarcated by the system. Finally, since many neurosurgeons are trained to employ operating microscopes to aid in tumor resection, the imaging system needs to interface with an operating microscope to integrate seamlessly into the clinical setting of the OR.

Theory

The basis of our system lies in Hadamard transforms, which allow multiplexing; specifically, measuring n unknowns with n measurements. Each measurement takes information in groups rather than selecting one unknown at a time – this reduces the error for each unknown and increases the overall SNR by a factor of up to √n.5 Applying this concept to light yields Hadamard transform spectroscopy (HTS), which is used for spectral imaging. HTS uses different “masks” or configurations to multiplex the light according to a weighting scheme, blocking, transmitting, or reflecting light in different directions. This creates differently weighted pictures of an object that can be summed together to form an image.5 Another type of multiplexing, the Fourier transform, uses interference for the same purpose but has two significant disadvantages: 1) only 50% of the light can be used at any given time12, and 2) the inverse Fourier transform multiplies and divides the different pictures created, which is more computationally intensive than the addition and subtraction employed by Hadamard multiplexing. The difference in quality between Hadamard and Fourier transform imaging can be seen in Figure 6.

Each configuration of the Hadamard mask can be expressed as a matrix of 1s and -1s, with 1s representing light reflected toward the detector and -1s representing light reflected along a second path; by collecting both legs and subtracting during application of the inverse Hadamard transform, the full Hadamard transform uses 100% of the incoming light. The S-matrix, a specialized form derived from Hadamard matrices, uses only 1s and 0s and therefore passes only 50% of the incoming light, discarding the other half. The number of mask configurations equals the number of unknowns (in our case, the number of pixels in the multiplexed dimension of the image). For n unknowns, the S-matrix still provides an improvement in signal-to-noise ratio (SNR) of approximately √n/2 over measuring one unknown at a time.5
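The S-matrix construction described above can be sketched numerically. The following Python snippet (an illustrative toy with hypothetical names, using order 3 rather than the system's order 512) builds an S-matrix by deleting the first row and column of a Sylvester-type Hadamard matrix and mapping the entries to 0s and 1s, then verifies that the closed-form inverse recovers the multiplexed unknowns:

```python
import numpy as np

def sylvester_hadamard(n):
    """Build an n x n Hadamard matrix (n a power of 2) by Sylvester's recursion."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# S-matrix of order n-1: drop the first row/column of the Hadamard matrix
# and map +1 -> 0 (blocked) and -1 -> 1 (passed).
H = sylvester_hadamard(4)
S = (1 - H[1:, 1:]) // 2              # 3 x 3 matrix of 0s and 1s

# Closed-form inverse of an order-m S-matrix: (2/(m+1)) * (2*S.T - J),
# where J is the all-ones matrix.
m = S.shape[0]
S_inv = (2.0 / (m + 1)) * (2 * S.T - np.ones((m, m)))

x = np.array([3.0, 1.0, 4.0])         # unknown intensities at 3 "pixels"
y = S @ x                             # multiplexed measurements (groups of pixels)
x_hat = S_inv @ y                     # demultiplexed estimate
```

Each row of the toy S-matrix passes (n+1)/2 = 2 of the 3 pixels, illustrating why the S-matrix uses roughly half of the incoming light.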

Digital micromirror devices (DMDs) are a recent innovation that can be used to apply a Hadamard transform (Figure 7). A DMD is a panel of tiny mirrors, each 13-17 µm on a side, that can be set independently to one of two angles to reflect light in two directions – this element allows the application of the Hadamard transform in spectral imaging.13 The collected light carries three dimensions of information (x, y, and λ). The DMD applies the different masks, with each mirror acting as one pixel: when incoming light strikes the array, each mirror reflects it toward either the collected leg or the discarded leg according to the current mask, and the weighting of the two directions encodes the image for reconstruction. While other labs have used this methodology, they mostly multiplex the collected light in the λ dimension and adjust the DMD to collect n wavelengths.14 For our design, the light is multiplexed spatially rather than spectrally; the collected leg of light is compressed such that all wavelengths can be captured from one line, rather than one wavelength over an area.

Design

Our design of a Hadamard-transform based spectral imaging system builds upon preliminary work done by Steven Gebhart, a biomedical engineering graduate student at Vanderbilt, and Heemesh Seth, an undergraduate biomedical engineering student who graduated in 2004. Our brainstorming sessions in September 2004 began by reviewing the previous research done on the system and subsequently determining the most logical system design using theoretical calculations. The system is intended to image brain tissue in three dimensions (3D)—two spatial dimensions (referenced as x and y) and one spectral dimension (λ)—using a two-dimensional (2D) CCD camera. The collection of 3D data with a 2D detector is accomplished by application of a Hadamard matrix (n = 512) to the collected image through a computer-controlled digital micro-mirror device (DMD). The Hadamard matrix provides a “map” to multiplex and reconstruct one of the spatial dimensions (e.g. x), allowing us to encode this spatial dimension within a series of two-dimensional images with dimensions of y and λ. Therefore, the information at each pixel within a frame of CCD data possesses a unique y-position, wavelength, and weighted sum of x-positions. Using the Hadamard matrix in conjunction with the DMD also provides a higher signal-to-noise ratio (SNR) when compared to other spectral imaging systems.
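The encoding scheme described above can be sketched with a toy example. Following the text's example of multiplexing the x dimension (the snippet below uses small hypothetical dimensions and a random invertible 0/1 mask set standing in for the order-512 S-matrix applied by the DMD), each CCD frame becomes a weighted sum over x with dimensions y and λ, and inverting the mask matrix recovers the full data cube:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nl = 8, 4, 5                      # toy dimensions: x, y, wavelength
cube = rng.random((nx, ny, nl))           # "true" 3-D sample data

# Toy 0/1 mask set standing in for the S-matrix rows applied by the DMD;
# regenerate until the mask set is invertible so demultiplexing is possible.
masks = rng.integers(0, 2, size=(nx, nx)).astype(float)
while np.linalg.matrix_rank(masks) < nx:
    masks = rng.integers(0, 2, size=(nx, nx)).astype(float)

# Each CCD frame is a weighted sum over x: a 2-D (y, lambda) image.
frames = np.einsum('kx,xyl->kyl', masks, cube)

# Demultiplexing with the inverse mask matrix recovers the encoded x dimension.
recovered = np.einsum('xk,kyl->xyl', np.linalg.inv(masks), frames)
```

Each pixel of a frame therefore holds a unique y-position and wavelength together with a weighted sum of x-positions, exactly as described in the text.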

Preliminary Design

We divided our initial system design into three sections: a collection stage (Stage 1) where light from the sample is collected, collimated, and then compressed to fit within the 10mm square area of the DMD; a compression stage (Stage 2) where the transformed image from the DMD is compressed in one dimension; and a dispersion stage (Stage 3) where a grating disperses the spectral components of the line image across a CCD camera. Each of these stages was considered independently to simplify the design process. Dividing the system allowed us to test each stage individually before inserting it into the main system. In addition, each stage had its own design criteria; therefore, taking the overall design in a modular fashion allowed for greater focus on those specifications.

The requirements for Stage 1 are three-fold. First, the initial image collection must occur 8-10 inches (20.32-25.4cm) from the tissue source in order to make implementation in the operating room feasible. Second, the output of Stage 1 must fit within the 10x10-mm square DMD area (DMD area based on a 45º DMD angle with respect to the image path), requiring a demagnification of 0.4 from the original target sample field of view. Finally, the light coming from this stage should be collimated. It is important to note that this design is built upon the assumption that the light entering the system is collimated. A camera lens with a 28-80mm adjustable focal length and a maximum aperture of 25mm was supplied as a collection lens, adding additional constraints to our system design.

A two-lens design using the camera lens as the first element was chosen. The resulting magnification of such a two-lens system is the ratio of the focal lengths of the two lenses used:

M = f2 / f1          Equation 1

In order to collimate the light at the output of the second lens, the distance between the lenses must equal the sum of the focal lengths of the two lenses used. In theory, this creates a collimated output image provided the input is collimated. The second lens must also be an achromatic lens to eliminate chromatic aberrations that would induce distortion in our output. Using the constraints on focal length from the camera lens, along with a list of focal lengths for commonly available achromatic lenses, we calculated a two-lens system which met our design goals for Stage 1 (for complete calculations, see Appendix I). The preliminary design for Stage 1 consisted of the camera lens (f = 75mm, 25mm aperture) coupled with an achromatic doublet lens (f = 30mm, 25mm diameter), with the camera lens positioned 25.4cm (10in.) from the sample (Figure 8).
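The Stage 1 numbers can be checked directly from Equation 1 (magnification as the ratio of focal lengths) and the coupling condition (lens separation equal to the sum of focal lengths). A minimal sketch using the values quoted above:

```python
# Coupled two-lens relay: magnification is the ratio of focal lengths
# (Equation 1), and the lenses sit one focal-length sum apart.
f_camera = 75.0                              # mm, camera lens focal length
f_doublet = 30.0                             # mm, achromatic doublet focal length

magnification = f_doublet / f_camera         # 0.4 demagnification
separation = f_camera + f_doublet            # 105 mm lens spacing

field_of_view = 25.0                         # mm, input aperture
output_size = field_of_view * magnification  # 10 mm, matching the DMD area
```

The 105mm separation computed here matches the doublet position used in the preliminary build.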

The image output from the achromatic doublet lens of Stage 1 is then passed to the DMD. As we assumed our light would be collimated, the distance to the DMD should not be an issue; the image should remain 10x10-mm square regardless of distance. However, no light is truly collimated and the system should take up as little space as possible. Therefore, our design placed the center of the DMD 25.4cm (10in.) from the end of Stage 1. As mentioned previously, the DMD in this design is at a 45º angle relative to the path of the incoming light, thus the reflected image from mirrors set at a 0º angle would be traveling orthogonal to the original light path (see Figure 9). It is at this point that the Hadamard matrix is applied by rapidly transitioning the micromirrors of the DMD (~20μs per transition) using a computer program designed by Steve Gebhart. Full implementation of the Hadamard matrix would require collection of light from both the 0º and 12º mirrors (representing 1 and -1 in the matrix), but as this is only a prototype system an S-matrix was deemed adequate. After application of the Hadamard transform, the light from the 12º mirrors (the zeros of the S-matrix) was blocked while the light from the 0º mirrors was passed to our Stage 2 compression system.

The design criteria for Stage 2 were the most challenging to implement in this optical system. Stage 2 must take the light reflected from the DMD and compress the vertical spatial dimension (y), creating a collimated 160µm x 10mm line that is passed on to Stage 3. The 160µm width of the line corresponds to 10 pixel widths (16µm square) on the CCD camera used to collect the system output. There is a tradeoff between the summation of intensities to an infinitesimally narrow line as prescribed by Hadamard multiplexing and the spectral resolution of the system; this relationship is discussed in greater detail in the design of Stage 3. The 160µm width was chosen to minimize dispersion of information within the multiplexed dimension while providing adequate spectral resolution. The system must also take up as little space as possible, so the path length of the light must be minimized. Instead of spherical optics, cylindrical optics were employed to compress one dimension of the image without affecting the other. Again, a coupled lens system was used to achieve compression and recollimation of the light. The same coupling strategy from our Stage 1 design applies in this case, with the distance between the two lenses equal to the sum of their focal lengths and the magnification determined by Equation 1. Calculations performed by Tasha Nalywajko (see Appendix I) indicated that three cascaded coupled-lens systems would be able to compress the 10mm x 10mm square image into the needed 160µm by 10mm line. However, after discussion with Dr. Anita Mahadevan-Jansen, this design was determined to be unacceptable due to loss of light. For each lens, specular reflectance from each surface results in a 4% loss of light. Extrapolating this to the 6 lenses (12 surfaces) needed for Stage 2, at most 61% of the light reflected from the DMD would be transmitted to Stage 3.
Replacing the cylindrical lenses with mirrors, which have a >95% reflectivity, solved the problem of light loss from the lens surfaces, and since mirrors are available with much shorter focal lengths the six coupled lenses were supplanted by a single coupled lens-mirror system (Figure 10). A 500mm focal length cylindrical lens coupled to a 6.48mm focal length cylindrical mirror produced a magnification of 0.013, effectively compressing the 10x10mm collimated image to 130µm x 10mm (~8 pixel widths) to be passed to Stage 3.
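The lens-mirror compression figures follow from the same ratio-of-focal-lengths relation (Equation 1). A quick check with the values quoted above:

```python
# Stage 2 compression: cylindrical lens (f = 500 mm) coupled to a
# cylindrical mirror (f = 6.48 mm), compressing only the y dimension.
f_lens = 500.0                               # mm
f_mirror = 6.48                              # mm

magnification = f_mirror / f_lens            # ~0.013
line_width_mm = 10.0 * magnification         # 10 mm image -> ~0.13 mm line
pixels = line_width_mm * 1000 / 16.0         # line width in 16 um CCD pixels
```

The resulting ~130µm line spans roughly 8 CCD pixel widths, consistent with the figure quoted in the text.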

The light is then passed to Stage 3 to be dispersed spectrally and collected. To accomplish this, the light in the vertical dimension (y) must be separated by wavelength in such a manner that the 400-800nm wavelength range falls within the 8.2-mm width of the CCD array. Also, the image must be compressed in the horizontal dimension (x) from 10mm wide to 8.2mm wide in order to be collected by the CCD camera with no loss of light. Spectral dispersion is accomplished through use of a dispersion grating, which disperses different wavelengths of incident light at different angles depending upon the density of the grooves on the grating. The governing equation for the dispersion is:

sin θ + sin α = m a λ          Equation 2

where θ is the angle between the incident light and the plane normal to the dispersion grating, α is the dispersion angle unique to the wavelength of interest (also measured from the grating normal), m is the order number (assumed to be 1 for this case), λ is the wavelength of the light dispersed, and a is the number of grooves per millimeter on the dispersion grating. Melissa Walden calculated that a holographic dispersion grating of 1800 grooves/mm would allow us to collect 400-800nm light (for complete calculations, see Appendix I) provided the incoming light is 45º from the normal of the dispersion grating. Spectral resolution of the output from the dispersion grating is dependent upon the order number and number of grooves used to disperse the light; the greater the number of grooves used, the higher the resolution:
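The grating equation (sin θ + sin α = maλ) can be solved for the dispersion angle α of each wavelength. A small sketch with hypothetical names, assuming first order (m = 1), an 1800 grooves/mm grating, and 45º incidence as stated above:

```python
import math

def dispersion_angle(wavelength_nm, grooves_per_mm=1800, theta_deg=45.0, m=1):
    """Solve the grating equation for the diffraction angle alpha (degrees)."""
    lam_mm = wavelength_nm * 1e-6   # convert wavelength to mm for unit consistency
    sin_alpha = m * grooves_per_mm * lam_mm - math.sin(math.radians(theta_deg))
    return math.degrees(math.asin(sin_alpha))

alpha_400 = dispersion_angle(400)   # blue end of the 400-800 nm collection band
alpha_800 = dispersion_angle(800)   # red end of the 400-800 nm collection band
```

The 400-800nm band thus spans a wide fan of angles, which Stage 3 must map onto the 8.2mm CCD width.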

λ / Δλ = m N          Equation 3

where N is the number of grooves illuminated by the incident beam.

Dispersing the image in the 130µm dimension with an 1800 grooves/mm grating should yield a spectral resolution of approximately 2.6nm at the middle wavelength of 600nm, which is adequate. The compression requirement was satisfied by inserting another cylindrical lens (500mm focal length) at a distance of 25mm from the dispersion grating along the center of the dispersion cone, compressing the light in the dimension not affected by the Stage 2 compression system. The dispersed and compressed light is finally collected 90mm beyond the cylindrical lens, as this is the point at which a 10mm line is compressed to 8.2mm by the lens (Figure 11). After 512 samples are collected (corresponding to each row of the order-512 Hadamard matrix), the output of the CCD camera is fed into a computer. The raw data contain a series of 2D images (x, spectrum) with the third dimension (y) encoded in the intensity of each pixel in the CCD array. Software written by Steve Gebhart applies the inverse Hadamard transform to the collected samples to extract this second spatial dimension, providing the three-dimensional data necessary for differentiating normal and tumor tissue.
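The 2.6nm figure follows from Equation 3 (λ/Δλ = mN), with N taken as the number of grooves illuminated by the 130µm beam. A quick check:

```python
# Spectral resolution via lambda / d_lambda = m * N, where N is the number
# of grooves illuminated by the 130 um beam on an 1800 grooves/mm grating.
beam_width_mm = 0.130
grooves_per_mm = 1800
m = 1                                        # first diffraction order

N = beam_width_mm * grooves_per_mm           # ~234 grooves illuminated
d_lambda = 600.0 / (m * N)                   # resolution (nm) at 600 nm
```

This reproduces the ~2.6nm resolution quoted above, and makes the tradeoff explicit: narrowing the line further would illuminate fewer grooves and coarsen the spectral resolution.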

Preliminary Build

The parts for this prototype were purchased from Edmund Industrial Optics, Lambda Research Optics, and Newport Corporation with money provided by the NIH R01 grant CA085989. Lab space for the build was provided in the Biomedical Optics Lab of Vanderbilt University. Assembly of the system began by mounting a white light source and determining the optimal position for components in Stage 1. A white light source coupled to a liquid light guide was initially used for input into our system, and a flat with a 25mm x 25mm aperture was constructed to simulate our optimal field of view. The camera lens was mounted with two three-screw ring mounts to maximize stability, and the 30mm focal length achromatic doublet was positioned 105mm from the back end of the camera lens with a standard lens mount and holder. The camera lens and achromatic doublet were both mounted using a rail-and-slider system to allow for precise, empirical distance alignment of the lenses. Figure 12 shows the initial setup of Stage 1 without the optical flat.

When this system was tested with the square optical flat inserted, a circular output was obtained at the DMD. This was puzzling, as the input was square, so different source/camera-lens distances were attempted; all outputs were circular. We initially concluded that the diffuse nature of our light source was to blame and transitioned to a 5mW red HeNe laser. By expanding and recollimating the beam to encompass the 25mm x 25mm square, we obtained a square output from the optical flat. However, our output from Stage 1 was still circular. Finally, we concluded that the camera lens could not accurately pass a full 25mm x 25mm square due to its limited aperture. Instead, a circular field of view with a diameter of 25mm was the effective input of our system, theoretically producing a circular output with a diameter of 10mm at the DMD. Using basic trigonometry, a square with a maximum width and height of 7.07mm can be inscribed within this circle, meaning we would not be utilizing the entire DMD.
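The 7.07mm figure is simply the side of the largest square inscribed in the 10mm circle (the square's diagonal equals the circle's diameter):

```python
import math

# A square inscribed in a circle has side = diameter / sqrt(2), so a 10 mm
# circular image at the DMD only fills a ~7.07 mm square region of mirrors.
dmd_image_diameter = 10.0                          # mm, circular output at the DMD
inscribed_square = dmd_image_diameter / math.sqrt(2)
```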

A second troubling finding in the initial testing of Stage 1 was the size of the output at the DMD. The diameter of the circular output from our initial test was 17.42mm, 174% of the expected 10mm size. By taking two measurements of the output diameter separated by a distance of 50mm (two holes on the optical table), we discovered that the output of Stage 1 was not perfectly collimated and calculated the divergence angle to be approximately 17º. A suggested cause of this divergence was put forth by Melissa Walden: the exact location of the “lens” within the camera lens is unknown, as the camera lens is actually a multi-element composite lens. Working from this theory, we adjusted the distance from the camera lens to the achromatic doublet in an attempt to find the correct coupling distance. The minimum divergence angle we could obtain was 14º, with the doublet positioned 101mm from the end of the camera lens. This indicated that the light entering the system was not collimated, which became a potentially serious problem for our design. Transitioning to the HeNe laser solved the collimation problem but at the expense of simulating a real imaging scenario, which was unacceptable. It was suggested that we might be able to collimate the diffuse light by placing the sample at the lens focus, and that a 75mm achromatic doublet could be used with the original 30mm doublet to implement Stage 1 using light collimated by the camera lens. However, we were unable to achieve diffuse-light collimation in test trials.

Final Design

Our assumption of collimated input was finally discarded, requiring a redesign of the system to take into account a sample emitting diffuse light. Rather than attempting to keep the light collimated throughout the system—a task which proved to be impossible—it was instead decided that the best course of action was to assemble the system such that the sample image was in focus at the end of each stage of the system. We returned to our initial white light source, but preliminary testing using a sample image (a solid black rectangle on a white background) showed that there was too much stray light interference from the diffuse white light source. A light box was constructed to solve this problem. The box completely segregated the white light source from the remainder of the system, and the internal surface of the box was blacked out to eliminate internally reflected light. An aperture with a 25mm diameter was opened in the side of the box and fitted with a lens mount and holder containing a 100mm focal length lens to focus the light onto the sample.

As a Hadamard matrix of order 512 was to be used, and as the mirrors of the DMD are each 13µm x 13µm square, the focused image on the DMD needed to fall within an area approximately 6.7mm square in order to minimize light loss. Due to the indistinct “lens” of the camera lens, no theoretical calculation for Stage 1 was possible with this design; the correct focal lengths of the camera lens and the second lens of Stage 1, along with the distance between the two lenses, had to be determined empirically. With the camera lens set to a focal length of 28mm at a distance of 203.2mm (8in.) from the sample and a 50mm plano-convex lens positioned 110mm from the end of the camera lens, we obtained a focused image 5.06mm in diameter at a distance of 215mm from the second lens. We placed the DMD at the position of focus, ensuring that the mirrors were multiplexing specific spatial positions within the sample. Though the circular nature of our field of view prohibits use of the entire 512 x 512 matrix, the areas not used within the matrix will have zero intensity and therefore will not affect the multiplexing strategy.

When attempting to use the reflected light from the DMD in our initial Stage 2 design, we encountered myriad problems. First, the assumption of collimated light no longer governed the system, and the lens-mirror pair was not suitable for compressing the diverging light: the 500mm focal length lens used to collect this light in the initial design was not powerful enough to force the divergent light to converge, making it useless for compression in this instance. Also, when the light was reflected from the cylindrical mirror at 45º it became skewed, with one side wider than the other. This was unacceptable, as it created non-uniform compression of the image and a skewed output. A series of cylindrical lenses – the minimum number able to accomplish compression, for the light-loss reasons stated above – may make compression of the light in Stage 2 possible, but will require either a trial-and-error method like that used in Stage 1 or highly complex calculations.

Conclusions and Future Directions

The construction of a novel optical system is an intricate and often frustrating endeavor. Optical engineers must be able to think on their feet, and a large amount of empirical experimentation is involved. Theoretical calculations cannot predict all of the possible outcomes when a system is placed in a working environment, though they do provide a solid starting point for design given the correct initial conditions. This is apparent in the outcome of our system, as the final product was very different from our initial design.

Sample image delivery and the compression of this image for use with the DMD were accomplished within the specifications of the system. Calculation of the dispersion angle of the light from the DMD has also been completed; the angle was found to be 2.34º (see Appendix I). However, due to the substantial amount of troubleshooting involved in determining the behavior of the light within the system, redesign of Stage 2 could not be addressed. Building from the current design, the creation of a compression system for use in Stage 2 is feasible within a reasonable timeframe. We suggest the use of a spectrograph in place of the preliminary Stage 3 design, as this will eliminate the need to align additional optical components and will greatly simplify interfacing with the CCD camera. A diffraction grating of 1800 grooves/mm should be used (see Appendix I), and the Triax 180 spectrograph can be refit with this grating. Using a spectrograph adds an additional constraint to the Stage 2 design: the angle of convergence of the Stage 2 output must be smaller than the acceptance angle of the spectrograph (7.2º for the Triax 180).
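As a rough feasibility check on the suggested grating, the grating equation can be evaluated for 1800 grooves/mm. The Littrow configuration assumed here (incidence angle equal to diffraction angle) is an illustrative simplification; the internal geometry of the Triax 180 differs, so these figures are a sketch, not the Appendix I calculation.

```python
import math

# Grating-equation sketch for the suggested 1800 grooves/mm grating,
# evaluated in a Littrow configuration. The 550 nm probe wavelength is a
# representative value within the diffuse reflectance band, not a design
# specification.

GROOVES_PER_MM = 1800
d_nm = 1e6 / GROOVES_PER_MM      # groove spacing in nm (~555.6 nm)

def littrow_angle_deg(wavelength_nm, order=1):
    """Littrow angle from m*lambda = 2*d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength_nm / (2.0 * d_nm)))

def angular_dispersion_deg_per_nm(wavelength_nm, order=1):
    """d(theta)/d(lambda) = m / (d*cos(theta)), evaluated at the Littrow angle."""
    theta = math.radians(littrow_angle_deg(wavelength_nm, order))
    return math.degrees(order / (d_nm * math.cos(theta)))

theta_550 = littrow_angle_deg(550.0)             # ~29.7 degrees
disp_550 = angular_dispersion_deg_per_nm(550.0)  # ~0.119 degrees per nm
print(f"Littrow angle at 550 nm: {theta_550:.2f} deg, "
      f"angular dispersion: {disp_550:.4f} deg/nm")
```

The roughly 0.12º/nm of angular dispersion indicates that an 1800 grooves/mm grating spreads the reflectance band comfortably across a CCD at spectrograph working distances.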

The completion of this system is a worthwhile endeavor, as the evidence points to a need for a system of this type to increase the efficacy of brain tumor resection. The system is both economically feasible to construct and potentially lucrative. The most significant impact of this system will be an improvement in the quality of life and longevity for those suffering from brain tumors. This is certainly a goal worth working towards.

References:

[1] National Cancer Institute, "Adult Brain Tumor Treatment".

[2] Marcu et al., "Fluorescence Lifetime Spectroscopy of Glioblastoma Multiforme", J Photochemistry and Photobiology 80: 98-103, 2004.

[3] Lin et al., "Brain tumor demarcation using optical spectroscopy: an in vitro study", J Biomed Optics 5: 214-220, 2000.

[4] Lin et al., "In vivo brain tumor demarcation using optical spectroscopy", J Photochemistry and Photobiology 73: 396-402, 2001.

[5] Harwit and Sloane, Hadamard Transform Optics, Ch. 1 & 3, New York: Academic Press, 1979.

[6] "ABC2 Brain Cancer Statistics".

[7] Bucci MK et al., "Near complete surgical resection predicts a favorable outcome in pediatric patients with nonbrainstem, malignant gliomas…", Cancer 101(4): 817-24, 2004.

[8] Lacroix M et al., "A multivariate analysis of 416 patients with glioblastoma multiforme: prognosis, extent of resection, and survival", J Neurosurg 95: 190-198, 2001.

[9] Jaing TH et al., "Multivariate analysis of clinical prognostic factors in children with intracranial ependymomas", J Neurooncol 68(3): 255-61, 2004.

[10] "Decision Analysis to Estimate Cost Effectiveness".

[11] "Anatomy of the Brain".

[12] Hanley et al., "Spectral imaging in a programmable array microscope by Hadamard transform fluorescence spectroscopy", Applied Spectroscopy 53: 1-10, 1999.

[13] Laser Focus World.

[14] Hanley et al., "Three-dimensional spectral imaging by Hadamard transform microscopy in a programmable array microscope", J Microscopy 197: 5-14, 2000.

Figure 1: Comparison of resection percentages with survival time of low-grade severity brain cancer patients8

Table 1: Comparison of tumor resection cost with and without intraoperative imaging techniques

Figure 2: Schematic of Gaser probe3

Figure 3: Fluorescence spectra (left) and diffuse reflectance spectra (right) of various healthy and abnormal brain tissue in vitro3

Figure 4: Scatter plots of step 2 of the two-step algorithm for in vivo tissue samples with low reflectance (<3000 c.u.) (a) and high reflectance (>2500 c.u.) (b). The dashed lines represent the discrimination lines used to separate tumor margin and healthy tissue.

Figure 6: Comparison of Fourier transform imaging (left) and Hadamard transform imaging (right) in a satellite image of an island

Figure 7: Close-up view of the digital micromirror device set at different angles

Figure 9: Model of the DMD incorporated into the system

Figure 8: Stage 1 preliminary design

Figure 10: Overhead view of Stage 2 preliminary design

Figure 11: Stage 3 preliminary design

Figure 12: Preliminary Stage 1 setup
