PHS 398 (Rev. 9/04), Continuation Page



B.1. VOXEL-BASED MORPHOMETRY (VBM)

Voxel-based morphometry (VBM) is a fully automated image analysis technique that identifies regional differences in gray matter (GM) and white matter (WM) between groups of subjects without requiring a priori regions of interest (ROIs). VBM as implemented in the statistical parametric mapping (SPM) software (Wellcome Department of Cognitive Neurology, London, UK) starts by normalizing each structural MRI to the standard SPM template and segmenting it into white matter, gray matter and cerebrospinal fluid (CSF) based on a Gaussian mixture model (Ashburner and Friston, 1997, 2000). Based on the prior probability of each voxel belonging to a specific tissue type, an iterative Bayesian approach is used to obtain an improved estimate of the posterior probability. This probability is usually referred to as the gray/white matter probability density (Figure 1). In a slightly different formulation, the tissue density can be generated by convolving the binary mask of the tissue with a Gaussian kernel (Paus et al., 1999). Afterwards the density maps are warped into a normalized space and compared across subjects.
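
As a minimal sketch of this density formulation, assuming a binary gray matter mask and a bandwidth in voxel units (both toy inputs, not part of the proposal's pipeline), the convolution can be written in MATLAB with a separable Gaussian kernel:

    % Gray matter density by convolving a binary mask with a Gaussian
    % kernel (cf. Paus et al., 1999). gmMask and sigma are hypothetical.
    gmMask = double(rand(64,64,64) > 0.5);       % toy binary GM mask
    sigma  = 2;                                  % kernel bandwidth (voxels)
    r      = ceil(3*sigma);                      % truncate kernel at 3 sigma
    g      = exp(-(-r:r).^2/(2*sigma^2));
    g      = g/sum(g);                           % normalized 1D Gaussian
    density = convn(gmMask,  reshape(g,[],1,1), 'same');  % smooth along dim 1
    density = convn(density, reshape(g,1,[],1), 'same');  % along dim 2
    density = convn(density, reshape(g,1,1,[]), 'same');  % along dim 3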

VBM has been applied in cross-sectional studies of various populations: normal development (Paus et al., 1999; Good et al., 2001), autism (Chung et al., 2004), depression (Pizzagalli et al., 2004), epilepsy (McMillan et al., 2004) and mild cognitive impairment (MCI) (Johnson et al., 2004). A modified version of VBM has also been applied to the cortex in an AD study, where the fraction of gray matter within a ball of radius 15mm is taken as the gray matter density (Thompson et al., 2003). Related to VBM is the method of regional analysis of volumes examined in normalized space (RAVENS) (Davatzikos et al., 2001; Goldszal et al., 1998). In the RAVENS framework, the segmentation is performed by maximizing a posterior probability jointly over tissue types and mean intensities in a Bayesian framework (Goldszal et al., 1998).

B.2. DEFORMATION-BASED MORPHOMETRY (DBM)

Another very promising technique for non-ROI based morphometry is deformation-based morphometry (DBM), which uses deformation fields obtained by nonlinear registration of brain images (Ashburner and Friston, 1998; Chung et al., 2001; Wang et al., 2003) (Figure 1). The objective of image registration is to warp the images such that homologous regions of different brains are moved as close together as possible for quantitative comparison. Either explicit parametric basis function methods (Ashburner et al., 1997; Woods et al., 1999) or a nonparametric implicit multiscale approach (Collins et al., 1994) can be used to represent the deformation fields in the nonlinear registration. The widely available automated image registration (AIR) package uses polynomial basis functions to express the deformation fields. If $d(x) = (d_1(x), d_2(x), d_3(x))$ is the deformation vector field from a subject brain to a template, for example, the 3rd order polynomial warping function given by AIR can be written componentwise as

$$d_i(x) = \sum_{0 \le p+q+r \le 3} a_{ipqr}\, x_1^p\, x_2^q\, x_3^r, \qquad i = 1, 2, 3.$$

On the other hand, the SPM package (Ashburner et al., 1997) uses cosine basis functions to express the deformation fields. The variations in the coefficients of the basis function expansion can be used to quantify brain shape variation. Miller et al. (1997) and Wang et al. (2003) used the eigenfunction expansion of deformation fields in a group comparison. Instead of using deformation fields directly, it is convenient to use a relative change called the displacement vector field, defined as $U(x) = d(x) - x$ (Figure 2). Hotelling's $T^2$ statistic for the displacement field has then been used to detect morphological changes (Thompson et al., 1997; Bookstein, 1997; Joshi, 1998; Collins et al., 1998; Gaser et al., 1999; Cao and Worsley, 1999; Ashburner and Friston, 2000; Chung et al., 2001). In a slightly different formulation, Wang et al. (2003) used Hotelling's $T^2$ statistic of the coefficients of the basis function expansion to differentiate hippocampal shape between a group with MCI (n=18) and normal controls (n=26).
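
At a single voxel, the Hotelling's $T^2$ test described above reduces to a standard two-sample multivariate test on the 3D displacement vectors. A minimal MATLAB sketch with toy data (the sample sizes echo the MCI/control example above, but the vectors are random):

    % Two-sample Hotelling's T^2 at one voxel; U1, U2 hold one 3D
    % displacement vector per subject (rows) and are toy inputs.
    U1 = randn(18,3);  U2 = randn(26,3);         % e.g., MCI vs. controls
    n1 = size(U1,1);   n2 = size(U2,1);
    d  = mean(U1,1) - mean(U2,1);                % mean displacement difference
    S  = ((n1-1)*cov(U1) + (n2-1)*cov(U2))/(n1+n2-2);   % pooled covariance
    T2 = (n1*n2/(n1+n2)) * (d/S) * d';           % d * inv(S) * d'
    p  = 3;                                      % dimension of displacement
    F  = (n1+n2-p-1)/(p*(n1+n2-2)) * T2;         % equivalent F statistic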

B.3. TENSOR-BASED MORPHOMETRY (TBM)

B.3.a. TBM: As an extension of DBM, the higher order spatial derivatives of the deformation are used to construct morphological tensor maps such as local volume shrinkage/expansion and torsion (Davatzikos et al., 1996; Thompson et al., 2000; Chung et al., 2001; Chung et al., 2003a). From these tensor maps, 3D statistical parametric maps (SPM) are created to quantify the variations in higher order changes of the deformation fields. In TBM, the Jacobian determinant J of the deformation field is used to detect volumetric changes (Figure 2). Since the Jacobian determinant measures the volume of the deformed unit-cube after image registration, the rate of change of the Jacobian determinant over time, i.e. $\partial J/\partial t$, is the rate of local volume change. In brain imaging, a voxel can be considered the unit-cube; therefore, $\partial J/\partial t$ essentially measures the longitudinal change in the volume of each voxel (Chung et al., 2001). The objective of TBM is to compare regional differences in absolute tissue volume, while that of VBM is to compare regional differences in relative tissue concentration. The tissue volume difference is directly computed from the deformation field; therefore, accurate volume measurement in TBM requires very good image registration. On the other hand, the image registration used in VBM need not match every cortical feature exactly; it merely needs to correct for global brain shape differences. If the image registration were exact, all the segmented images would appear identical and no statistically significant differences would be detected. VBM tries to detect differences in the local concentration of gray matter at a local scale while removing global shape differences (Ashburner and Friston, 2000). For these reasons, TBM is considered to be at one end of the methodology spectrum with fine image registration while VBM is at the other end with coarse image registration (Ashburner and Friston, 2001).
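
The Jacobian determinant map itself is straightforward to compute from a displacement field by finite differences. A sketch under toy inputs (note that MATLAB's gradient differences along columns first, so the axis labels below follow that convention):

    % Jacobian determinant of x -> x + U(x) via finite differences.
    % Ux, Uy, Uz are hypothetical displacement components in voxel units.
    Ux = 0.1*randn(32,32,32);  Uy = 0.1*randn(32,32,32);  Uz = 0.1*randn(32,32,32);
    [Uxx,Uxy,Uxz] = gradient(Ux);                % partials of Ux
    [Uyx,Uyy,Uyz] = gradient(Uy);                % partials of Uy
    [Uzx,Uzy,Uzz] = gradient(Uz);                % partials of Uz
    % Voxelwise det(I + dU/dx), expanded as a 3x3 determinant:
    J = (1+Uxx).*((1+Uyy).*(1+Uzz) - Uyz.*Uzy) ...
      - Uxy.*(Uyx.*(1+Uzz) - Uyz.*Uzx) ...
      + Uxz.*(Uyx.*Uzy - (1+Uyy).*Uzx);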

B.3.b. Continuum from TBM to VBM: Because of nonlinear image registration, the volume of certain parts of the brain may shrink while others enlarge. In order to preserve the volume of a particular tissue within a voxel, the tissue density is modulated by multiplying it by the Jacobian determinant of the deformation (Ashburner and Friston, 2000; Good et al., 2001; Ashburner and Friston, 2001; Keller et al., 2004). This formulation, known as Jacobian modulation, has the effect of bridging the gap between the two extreme morphometries, VBM and TBM. The Jacobian-modulated VBM can be considered one analysis framework out of a continuous methodology spectrum between TBM and VBM. Although the motivation for multiplying by the Jacobian determinant is intuitive, it is not obvious that this is the optimal morphometric framework along the spectrum. The modulated VBM has been a contentious issue and there have been ongoing discussions on the optimal amount of image registration and modulation in VBM (Bookstein, 2001; Ashburner and Friston, 2001; Mehta et al., 2003; Crum et al., 2003); however, no one has yet addressed this problem in a systematic way. At this moment, most researchers perform the standard/modulated VBM under the default settings of SPM2. Good et al. (2001) and Keller et al. (2004) compared the results obtained from the standard VBM and the modulated VBM between clinical populations where the ground truth is unknown. Their approach is a simple eyeball comparison of the final statistical parametric maps without systematic quantification. To date no work has systematically compared results obtained using the standard/modulated VBM and TBM in a simulation study where the ground truth is known. In addition, there has been no quantitative study determining the optimal parameters and the amount of registration for VBM. It is not even clear that the statistically significant regions resulting from VBM and TBM will match, although this is assumed. To address all these unanswered questions, a different approach other than Jacobian modulation is needed. We propose to develop a new morphometric framework that bridges the gap between TBM and VBM in a multiscale fashion and to validate it on a well-constructed synthetic dataset where the ground truth is known (Aims 1 and 2).
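
In code, the modulation step itself is a single voxelwise multiplication; a toy sketch (all array names hypothetical):

    % Jacobian modulation: multiply the spatially normalized density by
    % the Jacobian determinant so that total tissue volume is preserved.
    density   = rand(32,32,32);                  % hypothetical GM density map
    J         = 1 + 0.1*randn(32,32,32);         % hypothetical Jacobian map
    modulated = density .* J;                    % modulated VBM measure
    gmVolume  = sum(modulated(:));               % total GM volume (voxel units)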

B.4. SURFACE-BASED MORPHOMETRY (SBM)

As a part of TBM, surface-specific morphometries have been developed (Dale and Fischl, 1999; Thompson et al., 2000; Chung et al., 2003a; Chung et al., 2003b). The cerebral cortex has the topology of a 2-dimensional convoluted sheet. Most of the features that distinguish cortical regions can only be measured relative to the local orientation of the cortical surface (Dale and Fischl, 1999) (Figure 4). Unlike 3D whole brain volume based VBM, DBM and TBM, SBM has the advantage of providing a direct quantification of cortical morphology. It is likely that different clinical populations will exhibit different cortical surface geometry. By analyzing surface measures such as cortical thickness, curvatures, surface area, gray matter density and fractal dimension, brain shape differences can be quantified locally (Thompson et al., 1998; Thompson et al., 2003; Chung et al., 2003a; Chung et al., 2003b). Extending the framework of SBM further, a morphometric analysis can be performed on the surfaces of brain substructures such as the hippocampus and amygdala (Wang et al., 2001; Shen et al., 2005).

B.4.a. Cortex extraction: The first step of SBM requires segmenting the brain tissue into three classes: white matter, gray matter and CSF. From the segmented images, the gray/white and gray/CSF interfaces are further extracted as triangular meshes via deformable surface algorithms (Davatzikos, 1995; Dale and Fischl, 1999; MacDonald et al., 2000). The gray/white matter interface is usually called the inner surface while the gray/CSF interface is called the outer surface. Partial voluming (Tohka et al., 2004) is a problem with binary tissue classifiers, but the topology constraints used in deformable surface algorithms were shown to provide some correction by incorporating neuroanatomical a priori information (MacDonald et al., 2000). The triangular meshes are not constrained to lie on voxel boundaries. Instead, the triangular meshes can cut through a voxel, which can be viewed as correcting for where the true boundary ought to be. Once we have a triangular mesh as the realization of the cortical surface, anatomical measures related to the cortical surface can be computed.

B.4.b. Cortical measures: Cortical changes have been found in AD and other dementias (Thompson et al., 1998, 2003; Studholme et al., 2001), so cortex-specific anatomical measures should be able to quantify the cortical shape variability across different populations (Chung et al., 2003a). The following measures have been proposed:

• cortical thickness measuring the distance between two interfaces (MacDonald et al., 2000; Miller et al., 2000; Fischl and Dale, 2000; Jones et al., 2000).

• local surface area dilatation measuring the area growth/atrophy rate (Chung et al., 2003a).

• gray matter density, defined as the fraction of gray matter within a ball of radius 15mm (Thompson et al., 2003).

• local gray matter dilatation rate measuring the gray matter volume growth/atrophy rate (Chung et al., 2003a).

• surface curvatures (Chung et al., 2003b).

Further, the 3D volume of a brain substructure can be estimated using the divergence theorem of Gauss (Chung et al., 2001; Wang et al., 2004), which estimates the 3D volume of the substructure using the normal vectors on the 2D surface that bounds it. This method was used to estimate the total volume of the hippocampus in an AD population (Wang et al., 2004). However, normal vectors and surface curvatures are extremely sensitive to noisy perturbations, so they are difficult to estimate accurately on discrete triangular meshes (Yuen et al., 1999; Page et al., 2001). Our proposed SBM technique, the thin-plate spline (TPS) based segmentation, will address this problem (Aim 3).
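
For a closed, consistently oriented triangular mesh, the divergence theorem reduces to summing signed tetrahedron volumes over the faces. A sketch on a toy sphere mesh (any closed mesh with vertex array P and face index array F would do):

    % Volume of a closed triangular mesh via the divergence theorem:
    % V = (1/6) |sum over faces of v1 . (v2 x v3)|.
    [X,Y,Z] = sphere(20);                        % toy closed surface
    fvc = surf2patch(X,Y,Z,'triangles');         % triangulate the grid
    P = fvc.vertices;  F = fvc.faces;
    v1 = P(F(:,1),:);  v2 = P(F(:,2),:);  v3 = P(F(:,3),:);
    vol = abs(sum(dot(v1, cross(v2,v3,2), 2)))/6;   % ~4*pi/3 for a unit sphere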

Due to the ambiguity of defining the distance between two surfaces, various techniques for measuring cortical thickness have been proposed: the minimum Euclidean distance method (Fischl and Dale, 2000), the Laplace equation approach (Jones et al., 2000), the Bayesian construction (Miller et al., 2000) and the automatic linkage method (MacDonald et al., 2000). There are a limited number of studies that used cortical thickness in normal or abnormal clinical populations (Fischl and Dale, 1996; MacDonald et al., 2000; Rosas et al., 2002; Chung et al., 2003a; Chung et al., 2005a). Although Lerch and Evans (2005) performed a comparative analysis on the precision of the different cortical thickness measures, they did not validate their accuracy. There is almost no consensus about which cortical thickness measure is most sensitive or whether they provide consistent statistical results. There is also an issue of image processing artifacts in most cortical thickness computation software due to the discretization error in representing the cortex as a triangular mesh. In Figure 3, (b) and (c) show the histograms of thickness measures based on MacDonald et al. (2000) and the FreeSurfer software (Dale and Fischl, 1999), respectively. Image processing artifacts can be observed at the extreme ends of the histograms, making the distribution of the thickness measures non-Gaussian. Our study proposes to perform a more rigorous comparative analysis of thickness measures and to develop a new cortical surface extraction method that will provide more accurate thickness estimation (Aim 3).

Surface normalization: In order to compare cortical measures such as cortical thickness across subjects, surface registration is necessary. Most surface registration techniques are based on mapping the surface to either a plane or a unit sphere, which simplifies the 3D surface warping problem to a 2D problem. Surface warping can then be formulated as a minimization problem of an objective function that measures the discrepancy between two surfaces (Thompson and Toga, 1996; Davatzikos, 1997; Van Essen et al., 1998; Dale and Fischl, 1999; Fischl et al., 1999; Robbins, 2003; Liu et al., 2004; Thompson et al., 2004).

B.4.c. Cortical flattening: Cortical flattening is needed for visualization (Figure 6). The flat representation of the cortical surface allows researchers to visualize the final statistical parametric maps in hidden sulcal regions. The cortical surface is usually assumed to have the topology of a two-dimensional highly convoluted sheet without any holes or handles (Davatzikos et al., 1996). For some particular surfaces, there exists a homothetic map that preserves both relative distances and angles; however, in most cases, we can only achieve either a conformal mapping that preserves angles (Angenent et al., 1999; Haker et al., 2000; Gu et al., 2003), an isometric mapping that preserves distances (Schwartz et al., 1989), or an area-preserving mapping. It is not possible to preserve both area and angle at the same time. The use of cortical flattening has been limited to cortical visualization.

B.6. DATA ANALYSIS

B.6.a. General linear model (GLM): Good et al. (2001) observed accelerated symmetric loss of grey matter in the parietal lobes, pre- and post-central gyri, insula and anterior cingulate cortex in the normally aging population. In addition, there is a steeper age-related gray matter volume decline in males (Cowell et al., 1994; Good et al., 2001). Therefore, it is necessary to remove the effect of age in any statistical analysis involving an aging population. A general linear model (GLM) is mainly used to test for the effect of explanatory variables on a collection of such images (Chung et al., 2004a). The GLM is a very flexible framework that can be used to localize the regions of anatomical difference between groups while removing the effect of covariates such as age, gender, total brain volume and handedness. The analysis of variance (ANOVA), the multivariate analysis of variance (MANOVA), the analysis of covariance (ANCOVA) and the multivariate analysis of covariance (MANCOVA) can all be viewed as special cases of the GLM (Timm and Mieczkowski, 1997). The GLM has been used in longitudinal VBM analysis of brain tissue volume expansion and atrophy (Paus et al., 1999; Good et al., 2001) and in two-sample group comparisons (Chung et al., 2004a; Chung et al., 2005a). The GLM framework has also been used in SBM to quantify cortical shape differences in longitudinal data (Chung et al., 2003a). This general statistical framework would therefore be useful, for instance, in determining which subjects with mild cognitive impairment (MCI) will convert to AD and in statistically comparing the rate of atrophy between MCI and AD groups.
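
A minimal voxelwise GLM of this kind can be sketched as an F-test comparing a full model (with group) against a reduced model (nuisance covariates only); all inputs below are toy data, and fcdf assumes the MATLAB Statistics Toolbox:

    % Voxelwise GLM: test a group effect while covarying out age/gender.
    n = 40;  nVox = 1000;
    Y      = randn(n, nVox);                     % toy anatomical measure per voxel
    age    = 60 + 15*rand(n,1);
    gender = double(rand(n,1) > 0.5);
    group  = [zeros(n/2,1); ones(n/2,1)];        % 0 = control, 1 = patient
    X0 = [ones(n,1) age gender];                 % reduced (nuisance) model
    X1 = [X0 group];                             % full model with group effect
    R0 = Y - X0*(X0\Y);                          % residuals, reduced model
    R1 = Y - X1*(X1\Y);                          % residuals, full model
    df1 = 1;  df2 = n - size(X1,2);
    F = ((sum(R0.^2) - sum(R1.^2))/df1) ./ (sum(R1.^2)/df2);   % 1 x nVox F map
    pval = 1 - fcdf(F, df1, df2);                % uncorrected p-values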

B.6.b. Synthetic image simulation: Most methodologies for simulating MRIs simulate intensity variations with fixed anatomy (Cocosco et al., 1997; Collins, 1998). The BrainWeb image simulation software has three parameters for simulating T1 weighted MRI: slice thickness, amount of noise and intensity non-uniformity (Cocosco et al., 1997; Collins, 1998). It has mainly been used to validate image segmentation techniques since it comes with the true gray matter segmentation (Xu et al., 1999). Goldszal et al. (1998) modeled image intensity as a collection of regions with slowly varying intensity B-spline functions with white Gaussian noise. Spatial smoothness of each tissue type was obtained by discrete Markov random field modeling. These types of image intensity simulation cannot control the shape variations that are crucial for validating morphometric techniques. Davatzikos et al. (2001) manually simulated 30% uniform atrophy in two predefined gyri in 11 normal elderly subject MRIs, generating an additional set of data. They then performed a two-sample t-test comparing the real and the synthetic samples to validate the RAVENS method. In order to perform comparative analysis of different morphometric techniques, it is necessary to simulate shape variations of structural MRIs. Our research proposes to develop a systematic methodology and software for simulating shape variations and intensity variations at the same time (Aim 1).

B.6.c. Surface-based smoothing: In SBM, the segmentation, thickness computation and surface registration procedures are expected to introduce noise into the thickness measure. Compared to standard 3D whole brain volume based analysis, better sensitivity in cortical surface specific regions can be obtained in SBM due to the reduction of partial volume effects and constrained surface data smoothing (Andrade et al., 2001; Chung et al., 2003a). In order to increase the signal-to-noise ratio (SNR) and the sensitivity of statistical analysis with respect to the cortical geometry, cortical surface based data smoothing is necessary (Andrade et al., 2001; Chung et al., 2003a; Lerch et al., 2003; Thompson, 2004). Lerch and Evans (2005) showed that surface-based smoothing substantially increases the sensitivity of cortical thickness analysis. Gaussian kernel smoothing is widely used to smooth data in 3D whole brain images. The Gaussian kernel weights an observation according to its Euclidean distance. When the observations lie on a convoluted brain surface, however, it is more natural to assign the weights based on the geodesic distance along the surface. On a curved manifold, the straight line between two points is shorter than the geodesic, so Euclidean-distance weighting may incorrectly assign large weights to observations that are actually far apart along the surface (Figure 4). Therefore, smoothing data residing on a manifold requires constructing a kernel that is isotropic along the geodesic curves on the surface. One such approach, the anatomically informed basis function method (Kiebel and Friston, 2002), constructs an anisotropic Gaussian kernel that spatially adapts its shape in different anatomical regions in such a way that it effectively smoothes functional MRI data along the cortical sheet. Alternatively, diffusion smoothing has been developed for smoothing data along the cortex (Andrade et al., 2001; Chung et al., 2003; Cachia et al., 2003). Diffusion smoothing relies on the fact that Gaussian kernel smoothing in Euclidean space is a solution to an isotropic diffusion equation (heat equation). Instead of directly applying kernel smoothing, the same result can be obtained by solving a diffusion equation (Andrade et al., 2001; Chung et al., 2003; Cachia et al., 2003). Diffusion smoothing therefore generalizes Gaussian kernel smoothing to an arbitrary curved manifold. The drawback of the diffusion smoothing approach is the complexity of setting up a finite element method (FEM) and making the numerical scheme stable (Chung and Taylor, 2004). We propose to develop a simpler surface data smoothing technique called heat kernel smoothing that avoids the numerical instability (Aim 3).
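
In its simplest discrete form, kernel smoothing on a mesh is an iterated, Gaussian-weighted averaging over 1-ring neighbors; this is only a sketch in the spirit of the proposed heat kernel smoothing on a toy mesh, not the proposal's implementation:

    % Iterated Gaussian-weighted neighbor averaging on a triangular mesh.
    [X,Y,Z] = sphere(20);
    fvc = surf2patch(X,Y,Z,'triangles');  P = fvc.vertices;  F = fvc.faces;
    data = P(:,3) + 0.3*randn(size(P,1),1);      % noisy toy cortical data
    E = [F(:,[1 2]); F(:,[2 3]); F(:,[3 1])];    % directed mesh edges
    len = sqrt(sum((P(E(:,1),:) - P(E(:,2),:)).^2, 2));   % edge lengths
    sigma = 0.5;                                 % small per-iteration bandwidth
    W = sparse(E(:,1), E(:,2), exp(-len.^2/(2*sigma^2)), size(P,1), size(P,1));
    W = W + speye(size(P,1));                    % include the center vertex
    D = full(sum(W,2));
    W = spdiags(1./D, 0, size(P,1), size(P,1)) * W;   % row-normalize weights
    for k = 1:20                                 % iterate small-bandwidth kernel
        data = W * data;
    end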

B.6.d. Multiple comparisons: If a statistical test is performed at each voxel, the total number of tests is more than a hundred thousand in the whole brain. In order to account for the resulting false positives, multiple comparison correction is needed. There are three major approaches for controlling for multiple comparisons: the random field theory (Worsley, 1994; Worsley et al., 1996), the false discovery rate (FDR) (Benjamini and Hochberg, 1995; Genovese et al., 2002) and permutation tests (Nichols and Holmes, 2002). In the random field theory, the p-value of the maximum of a test statistic is computed. The resulting image is called the statistical parametric map (SPM) (Friston et al., 1995; Worsley, 1996; Friston, 2002) and has become the de facto tool for reporting statistical analysis results. Figure 2 illustrates SPMs for both longitudinal DBM and TBM analyses (Chung et al., 2001). Thompson et al. (2003) used the nonparametric permutation approach to assess statistical significance in an SBM study of AD. Benjamini and Hochberg (1995) proposed a completely different approach: instead of controlling the probability of ever reporting a false positive, as in the random field theory or the permutation test, FDR controls the expected proportion of false positives among discoveries (Worsley, 2003). Researchers are using three different multiple comparison methods in reporting their findings, but the SPMs generated from these techniques do not match, so it is difficult to compare findings across studies. We propose to develop statistical computing software that enables comparison across studies that use different multiple comparison corrections and p-values (Aim 4).
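
The FDR step-up procedure of Benjamini and Hochberg (1995) itself is only a few lines; a sketch on toy p-values:

    % Benjamini-Hochberg FDR thresholding of voxelwise p-values.
    pval = rand(1,1e5).^1.5;                     % toy p-values, enriched near 0
    q = 0.05;                                    % desired false discovery rate
    m = numel(pval);
    [ps, order] = sort(pval(:));
    k = find(ps <= (1:m)'/m * q, 1, 'last');     % largest index under BH line
    detected = false(size(pval));
    if ~isempty(k)
        detected(order(1:k)) = true;             % voxels declared significant
    end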

B.7. SIGNIFICANCE OF PROPOSED STUDIES

Although we are beginning to see non-ROI based morphometries applied to AD studies (Wang et al., 2003; Janke et al., 2001; Thompson et al., 2003, 2001; Lerch et al., 2005; Shen et al., 2005a; Frisoni et al., 2002; Good et al., 2002; Testa et al., 2004), all of these studies use different morphometric techniques, making cross comparison among studies difficult. As far as we are aware, there is no systematic study that cross-validates different morphometries on a well characterized data set. We propose to develop a unified morphometric analysis framework that enables us to generate a well characterized synthetic data set and to cross-validate different voxel-wise morphometries (VWM). Although the proposed study design is not limited to a specific population, it will be optimized for the aging population in general and the MCI and AD populations in particular.

The proposed studies will compare the three major techniques VBM, DBM and TBM in parallel in a multiscale fashion and evaluate their performance and statistical sensitivity on both the ADNI database and a well constructed synthetic data set where the ground truth is known. The number of basis functions serves as a scale in a multiscale framework. At each scale, we will compare the morphometric techniques. This enables us to cross-validate morphometries from one end of the methodology spectrum to the other in a continuous fashion. Although previous studies proposed the idea of combining VBM and TBM via Jacobian modulation (Ashburner and Friston, 2000; Bookstein, 2001; Ashburner and Friston, 2001; Good et al., 2001; Crum et al., 2003), it is not clear that this is the optimal morphometry along the methodology spectrum. Further, there has been no systematic comparative analysis between VBM and TBM in terms of performance and statistical power. Among other things, we will test whether the proposed absolute volume estimation from VBM is comparable to that of TBM. We will identify the optimal scale and parameters for VBM.

In SBM, we propose a new segmentation technique, the thin-plate spline (TPS) based segmentation, that gives a continuous representation of the cortical surface. Unlike the discrete triangular mesh representation of deformable surface algorithms (Davatzikos and Bryan, 1995; Fischl and Dale, 2000; MacDonald et al., 2000), our continuous representation should provide better estimates of cortical thickness, surface curvatures and other cortical measures that are used in quantifying cortical shape variations. Our cortical thickness estimation using TPS will be validated against other methods such as Fischl and Dale (2000), Jones et al. (2000), Miller et al. (2000) and MacDonald et al. (2000) using a synthetic data set where the ground truth is known. No study other than Lerch and Evans (2005) has quantitatively compared different cortical thickness measures. Hence, our proposed study will offer new insights into the interpretation of the different thickness measures. We will identify the cortical thickness measure that is most sensitive to anatomical change in AD.

In order to further increase the detection power in SBM, new surface-based smoothing methods called heat kernel and geodesic kernel smoothing will be developed. Our proposed smoothing methods avoid most of the shortcomings associated with the previously developed diffusion smoothing technique (Andrade et al., 2001; Chung et al., 2003a; Cachia et al., 2003; Chung and Taylor, 2004). The diffusion smoothing approach is somewhat complex to set up and it is hard to make the numerical scheme stable (Cachia et al., 2003; Chung et al., 2003b).

The proposed study design and new image processing techniques are not restricted to a specific population, so our approaches should be applicable in many different imaging studies.

C. PRELIMINARY STUDIES AND PROGRESS REPORT

Dr. Chung has conducted a series of studies on the methodological development of morphometries and applied them to normal development (Chung et al., 2001; Chung et al., 2003), melancholic depression (Pizzagalli et al., 2004), autism (Chung et al., 2004, 2005a, 2005b) and MCI (Shen et al., 2005a, 2005b). Dr. Chung has research experience with all branches of the VWM techniques: VBM (Chung et al., 2004; Pizzagalli et al., 2004), DBM (Chung et al., 2001), TBM (Chung et al., 2001; Chung et al., 2003a; Chung et al., 2003b) and SBM (Chung et al., 2003a; Chung et al., 2003b; Hoffmann et al., 2004; Chung et al., 2005a, 2005d; Xie et al., 2005b; Shen et al., 2005).

C.1. VBM, DBM and TBM

C.1.a. VBM: Although VBM was originally developed for whole brain 3D morphometry, Dr. Chung has applied the VBM framework to the 2D midsagittal cross-section of the corpus callosum to show that the ROI-based Witelson partition (Witelson, 1989) can be avoided in characterizing the anatomy of autistic subjects (Chung et al., 2004). Dr. Chung has shown that the total tissue volume of a region $\Omega$ can be estimated directly from the gray matter probability density $f$ as $\int_\Omega f(x)\,dx$, where $f$ is obtained by convolving the binary tissue mask with the Gaussian kernel $K_\sigma$ of bandwidth $\sigma$ (Chung et al., 2004). This formulation can be used to extend VBM to global atrophy analysis frameworks (Freeborough and Fox, 1997; Smith et al., 2002). Dr. Chung used this idea to estimate the total corpus callosum cross-sectional area from the white matter density (Chung et al., 2004). The idea of using VBM to perform global volumetry is new and has not been validated on a well constructed synthetic data set. Because of the wide availability of the SPM software and its ease of use, VBM has been the most widely used VWM in recent years. If we can extend VBM to global volumetry, it will provide a new tool for researchers.

C.1.b. DBM: Dr. Chung has developed a unified statistical framework for measuring longitudinal changes in DBM and TBM (Chung et al., 2001; Chung et al., 2003a). The unification comes from a single model for structural change, rather than two separate models, one for the displacement in DBM and one for the Jacobian determinant in TBM. It is based on the following simple stochastic model for the displacement U:

$LU = \Sigma^{1/2}\,\epsilon$, where $L$ is a partial differential operator that models the growth and atrophy of brain tissue, $\epsilon$ is a vector of independent Gaussian white noise fields, and $\Sigma$ is the covariance matrix which allows correlation between the components of the displacement vector U. Dr. Chung has also introduced the rate of the Jacobian determinant change to measure brain tissue growth or atrophy over time (Chung et al., 2001). This modeling framework is directly applicable to longitudinal data analysis in AD.

C.1.c. TBM: The statistical distribution of the Jacobian determinant can be derived from the above model, unifying the independently treated DBM and TBM (Chung et al., 2001). Although there are many different ways of detecting morphological changes in DBM and TBM, the tensor maps of a translation, a rotation and a strain were shown to be sufficient for detecting a relatively small displacement and, in turn, for characterizing morphological changes over time. From these tensor maps, 3D statistical parametric maps (SPM) are created for a group of subjects to quantify the variations in length, area, volume and surface curvature and to visualize these variations in the 3D whole brain volume; however, it was shown that the most important tensor map, corresponding to brain tissue growth and atrophy, is the Jacobian determinant (Chung et al., 2001). In AD progression, the rate of brain atrophy changes over time and is significantly higher shortly before the onset of dementia (Fox et al., 1999). So the volume dilatation rate, which is the first order approximation of the rate of the Jacobian determinant change (Chung et al., 2001), would be a useful index for characterizing AD progression. Dr. Chung proposed to extend the voxel-wise TBM to measuring the global volume of a region $\Omega$ by integrating the Jacobian determinant over the corresponding region $\Omega_0$ in a template (Chung et al., 2001): $V(\Omega) = \int_{\Omega_0} J(x)\,dx$. By comparing the global volume measurements obtained from VBM and TBM, we can directly compare VBM and TBM. This idea has not been investigated before. We propose to perform this comparison (Aim 2).
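
The comparison itself amounts to integrating two different maps over the same template ROI; a toy sketch (all arrays hypothetical):

    % Global ROI volume from TBM (integrating J) vs. VBM (integrating f).
    J       = 1 + 0.05*randn(64,64,64);          % hypothetical Jacobian map
    f       = rand(64,64,64);                    % hypothetical GM density map
    roi     = false(64,64,64);  roi(20:40,20:40,20:40) = true;  % template ROI
    voxVol  = 1.0;                               % voxel volume in mm^3
    volTBM  = sum(J(roi))*voxVol;                % TBM-based ROI volume
    volVBM  = sum(f(roi))*voxVol;                % VBM-based ROI volume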

C.2. SURFACE-BASED MORPHOMETRY (SBM)

As a subset of TBM, Dr. Chung has developed surface-based morphometry (SBM) that models the deformation of the brain and key morphological metrics such as length, local area and volume dilatations and curvature change in a differential geometric framework (Chung et al., 2003a; Chung et al., 2003b; Chung et al., 2005). The deformation of the cortical surface was modeled stochastically as the boundary of multi-component fluids in tensor geometry (Drew, 1991), and age related neuroanatomical changes of the cortical surfaces were computed (Chung et al., 2003a; Chung et al., 2003b). Using this method, it was possible to localize the regions of gray matter atrophy, cortical surface reduction, cortical thinning and surface bending in a normal population (Chung et al., 2002) (Figure 7). Because the technique is based on coordinate-invariant tensor geometry, artificial surface flattening (Andrade et al., 2001; Angenent et al., 1999; Van Essen et al., 1998), which distorts the inherent geometrical structure of the cortical surface, has been avoided. However, statistical parametric maps can still be projected onto a sphere or a plane for better visualization. Dr. Chung proposed to estimate the volume of a gray matter ROI using the divergence theorem of Gauss in Chung et al. (2001). It estimates the 3D volume of the ROI using geometric information such as the normal vectors on the 2D surface that bounds the ROI. The idea was independently used to estimate the total volume of the hippocampus in an AD population (Wang et al., 2004). However, normal vectors as well as surface curvatures are extremely sensitive to noisy perturbations, so they are difficult to estimate accurately on discrete triangular meshes. Our proposed thin-plate spline segmentation gives a differentiable surface that avoids this problem (Aim 3).

C.2.a. Curvature/metric tensor estimation: Dr. Chung estimated cortical curvature and metric tensors using local polynomial fitting on triangular meshes (Chung et al., 2003a; Chung et al., 2003b). The sum of the principal curvatures was used as an index of cortical bending (Figure 6). It was shown to be a more stable curvature index than either the mean or the Gaussian curvature. The principal curvatures were also used to segment sulci and gyri automatically, but further research and validation need to be done (Chung et al., 2003a).

C.3. STATISTICAL METHODS

A unified statistical framework based on random fields has been developed by Dr. Chung for both the 3D whole brain volume and the 2D cortical surface (Chung, 2001; Chung et al., 2001; Chung et al., 2003a). The components of the deformation fields were modeled as Gaussian random fields with a covariance structure that needs to be estimated. An alternative approach is to model the components of the deformation fields via an orthogonal basis function expansion called the Karhunen-Loève expansion (Chung, 2001; Muller, 2005). The advantage of this expansion is that the choice of basis functions makes their coefficients uncorrelated, making the subsequent statistical analysis much easier to handle. These techniques are well suited for quantifying and characterizing tissue atrophy without the specification of regions of interest.

C.3.a. General linear models (GLM): Dr. Chung has performed between-subject and between-group comparisons statistically while removing the effect of covariates such as age, gender, total brain volume and handedness using a GLM. It was used for longitudinal analyses (Chung et al., 2001, 2003a, 2004a) and group comparisons (Chung et al., 2004a; Chung et al., 2005a). Figure 13 illustrates the additional detection power in VBM when the age effect is removed. The GLM can be used to test, for instance, whether the rates of gray matter atrophy in subjects with MCI and subjects with AD are equal.

C.3.c. Heat kernel smoothing: Heat kernel smoothing is formulated as a series of iterated convolutions between the data and a heat kernel. It can be proven that smoothing with a large bandwidth is equivalent to iterative kernel smoothing with a smaller bandwidth (Chung et al., 2005a). The heat kernel is given in terms of the eigenfunctions of the Laplace-Beltrami operator, and it converges locally to a Gaussian kernel for a small bandwidth. Hence, heat kernel smoothing with a large bandwidth can be achieved by iteratively applying heat kernel smoothing with a smaller bandwidth. Heat kernel smoothing was applied to cortical thickness measures in SBM to increase the signal-to-noise ratio with a relatively large FWHM of 30mm (Chung et al., 2005a). Extending heat kernel smoothing, we have developed geodesic kernel smoothing, which uses fewer iterations and a larger bandwidth. We first compute the geodesic distance on the cortex by dynamic programming (Bertsekas, 1987); the kernel is then constructed by assigning weights as a function of the geodesic distance and normalizing. This approach avoids the necessity of choosing a small bandwidth, so the smoothing can be performed with fewer iterations and a larger bandwidth. In this formulation, heat kernel smoothing is a special case of geodesic kernel smoothing (Chung and Tulaya, 2005). The properties of geodesic kernel smoothing are still under investigation and we are currently trying to speed up the algorithm.

C.3.d. Partial correlation mapping: Since the initial submission of the proposal, we have streamlined the partial correlation mapping strategy further. Since the ADNI database contains many non-anatomical measures, this technique will be very useful in understanding the relationship between anatomical measures and non-imaging based measures. Many previous anatomical studies neglect to account for the age effect, and the subsequent statistical parametric maps tend to report spurious results in between-group comparisons (Chung et al., 2004a; Chung et al., 2005b). Dr. Chung applied the partial correlation mapping idea to effectively remove the effect of age and global cortical area difference while localizing the regions of high correlation between anatomical measures and cognitive measures (Chung et al., 2005b). The partial correlation is a correlation that partials out other covariates such as gender, age and global brain size difference (Grunwald et al., 2001). Unlike the GLM approach, it provides a more meaningful correlation SPM that gives a direct visualization of the relationship between cognitive measures and anatomy.

C.3.e. Multiple comparisons: Dr. Chung has used the random field approach in VBM (Chung et al., 2004), DBM (Chung et al., 2001) and TBM (Chung et al., 2005; Chung et al., 2003a; Chung et al., 2003b), while Dr. Johnson has used the false discovery rate approach (Genovese et al., 2002) in VBM (McMillan et al., 2004; Johnson et al., 2004). In one of the SBM studies, Dr. Chung was able to incorporate the amount of surface-based smoothing into the random field theory based corrected p-value formulation (Chung et al., 2003a; Chung et al., 2005a).

D. RESEARCH DESIGN AND METHODS

D.3. IMAGE PROCESSING

D.3.a. Intensity normalization: After acquiring T1 and T2 weighted MRIs from the ADNI database, intensity nonuniformity will be corrected using the algorithm of Sled et al. (1998), which is implemented in the N3 software package from the Montreal Neurological Institute. Since our aim is to validate VBM and TBM under the same image processing parameters, we will also use the intensity normalization routine in SPM2 (Ashburner and Friston, 1997).

D.3.b. Image segmentation: For whole brain volume segmentation, we will use both the Gaussian mixture modeling of SPM2 and our own thin-plate spline (TPS) based segmentation to segment three tissue types: gray matter, white matter and CSF. In the TPS method, the probability of a voxel belonging to a particular tissue type is computed by evaluating the volume bounded by the TPS boundary and the voxel boundary (Figure 18) (Xie et al., 2005a). It should provide a better estimate than the SPM2 tissue probability. Brain substructures such as the corpus callosum and hippocampus will be segmented using our level set toolbox (Figure 9) (Hoffmann, 2003). We will validate our methodologies on a synthetic dataset.

D.3.c. 3D volume registration: For image registration, we will use the SPM (Ashburner and Friston, 1997) and AIR packages (Woods et al., 1998). Since they provide an explicit basis function expansion of the deformation fields, they avoid an additional computational burden (Collins and Evans, 1999). A tissue specific registration (gray matter to gray matter) will also be used to obtain a more accurate registration of the gray matter. For longitudinal data, image registration will be performed within a subject first and later mapped to a template.

D.3.d. Templates: An age-specific template reduces registration errors when statistical analysis is performed within that particular age group. In order to sensitize the subsequent statistical analysis, it is necessary to construct a new template rather than to use publicly available templates. We will construct a template brain by choosing the coefficients of the basis function expansion of the deformation fields such that they solve a certain regularization problem; specifically, we will choose the coefficients that give the minimum sum of the squared total deformation (Davis et al., 2004). Our proposed template construction method provides a multiscale representation of the template from coarse to fine scale depending on the number of basis functions used. Since it is crucial to compare with VBM results generated from the default setting of SPM2, we will also use the default SPM2 template.

D.4. MULTISCALE APPROACH TO VOXEL-WISE MORPHOMETRIES.

D.4.b. Anatomical measures:

VBM: Tissue density maps (gray and white matter) will be computed. Total gray matter volume will be computed using the formula $\int_\Omega f(x)\,dx$ (Chung et al., 2004). For longitudinal data, the tissue density dilatation rate over time, defined as the percentage metric change over time (Chung et al., 2001; Chung et al., 2003a), will be computed. Hippocampal volume will be estimated similarly by integrating the tissue density map over the template hippocampus. The modulated tissue density will also be computed.

DBM: The displacement vector field, its length and its covariance matrix will be computed. The length dilatation rate will be computed for longitudinal data (Chung et al., 2001).

TBM: The Jacobian determinant will be computed. The volume dilatation rate will be computed for longitudinal data (Chung et al., 2001). Total gray matter volume will be computed by integrating the Jacobian determinant, $\int_{\Omega_0} J(x)\,dx$ (Chung et al., 2003a). Hippocampal volume will be estimated by the same method.

ROI volumetry: The VWM will be validated against ROI-based volumetry in the hippocampal and amygdala regions (n≥20 for each group). The hippocampus and amygdala will be manually segmented using the SPAMALIZE package (brainimaging.waisman.wisc.edu/~oakes/spam/spam_frames.htm) developed by Dr. Oakes. The manual segmentation will follow the procedures developed by Dr. Oakes (Rusch et al., 2001; Oakes et al., 1999). The validation will be based on the t-statistic for paired measurements at 95% significance. We will also compare the sensitivity and specificity of VWM and the traditional ROI-based volumetry using the receiver-operating characteristic (ROC) curve (Testa et al., 2004). For other limbic system structures, we propose to use the deformable template approach (Miller et al., 1997; Chung et al., 2001). The statistical parametric maps will be superimposed on the template and the statistical significance of the anatomical change will be inferred with respect to the position of the limbic system structures in the template.

D.5. NEW SURFACE-BASED MORPHOMETRY

D.5.c. Anatomical measures from TPS.

Cortical thickness: The minimum Euclidean-distance based thickness, as defined and validated in Fischl and Dale (2000) and Lerch and Evans (2005), will be used to compute the cortical thickness of the TPS boundary. This is the thickness metric implemented in the widely used FreeSurfer package. We expect our thickness estimation to provide a better estimate than the FreeSurfer based estimate due to the smooth functional representation of the cortical boundary. We will validate the TPS-based thickness measure against the Laplace equation method (Jones et al., 2000), the minimum Euclidean distance (Fischl and Dale, 2000), the Bayesian construction (Miller et al., 2000) and the automatic linkage method (MacDonald et al., 2000). Since each thickness metric is based on a different definition, the validation will focus on the statistical sensitivity of detecting shape differences between groups and over time using synthetic data (D.6.).

Total gray matter volume: Using the TPS cortical boundary, we will estimate the total gray matter volume; this should be more accurate than the simple voxel-counting method due to the reduction of the partial volume effect. We propose two methods for computing the total gray matter volume. In the first method, at each voxel we will compute the gray matter volume bounded by the TPS boundary and the voxel boundary; this computes the amount of gray matter within each voxel (Figures 8, 18). In the second method, the total gray matter volume is estimated by applying the divergence theorem of Gauss (Chung et al., 2001; Wang et al., 2003), which requires an accurate estimation of normal vectors. The normal vectors will be estimated by differentiating the TPS boundary analytically. The methods will be validated against BBSI (Freeborough and Fox, 1997; Barnes et al., 2004; Chen et al., 2004) and SIENA (Smith et al., 2002) on a longitudinal synthetic data set (n=10, 30, 50).

Hippocampal/amygdala surface: Taking the manual segmentation of the hippocampus/amygdala (D.4.c.) as an initial constraint, we will also segment the hippocampal/amygdala surfaces using the TPS method. Our method will be compared with ROI-volumetry and VWM.

D.5.e. Sulcal pattern analysis: A new multiscale morphometric analysis framework will be developed (Chung et al., 2005d). Surface curvature metrics such as the TPS potential energy (Shi et al., 2000) will be mapped onto a unit sphere using the spherical harmonic basis representation (Groemer, 1996) (Figure 20). By controlling the number of basis functions used, we can obtain a multiscale representation of the curvature metric. The curvature metric will then be classified into two classes, sulci and gyri (Figure 4), using machine learning techniques. We can use Gaussian mixture modeling (Ashburner and Friston, 2005), the support vector machine (SVM) or regression trees (Hastie et al., 2001). For the Gaussian mixture modeling, we currently have a modified EM algorithm implemented in MATLAB written by PhD student Shubing Wang. Sulcal/gyral maps will be warped to an average sulcal/gyral probability map using the surface normalization procedure. For SVM-based classification, we will use the SVM MATLAB code written by Dr. Gavin Cawley (~gcc/svm/toolbox/). The optimal scale (the number of spherical harmonic basis functions) will be automatically determined by model selection techniques (Hastie et al., 2001; Wang et al., 2001) (see D.4.d.). Our approach avoids time-consuming ROI-based sulcal analysis (Ochiai et al., 2004).

D.7. STATISTICAL ANALYSIS

Based on an optimal morphometric framework and the new SBM tools, we will perform data analysis on the ADNI database. The aim of the ADNI data analysis is to develop a reasonable statistical methodology and software to be used by other researchers who may lack advanced statistical skills. For this reason, almost all data analysis will be performed in MATLAB, and all scripts, functions and toolboxes will be made available through the website.

D.7.a. Overall statistical approach: We will use the general linear model (GLM) approach (Timm and Mieczkowski, 1997; Chung et al., 2004a; Chung et al., 2005a) and functional PCA (Lecoutre, 1990; Yao et al., 2003; Muller, 2005) as the basis for data analysis. We have written our own MATLAB tools that can perform GLM and functional PCA in various settings. These analysis frameworks were used in Chung et al. (2004a), Chung et al. (2005a) and Chung et al. (2005d). The GLM is a very flexible framework encompassing almost all statistical model-fitting frameworks: ANOVA, MANOVA, ANCOVA, MANCOVA and logit models. Given a response variable $Y$, nuisance variables $Z$ and variables of interest (predictors) $X$, the basic GLM has the form $E(Y) = Z\gamma + X\beta$ or, more generally, $g(E(Y)) = Z\gamma + X\beta$, where $g$ is a link function that dictates the relationship between the predictors and the expected response $E(Y)$. The identity link, i.e. $g(\mu) = \mu$, is mainly used for a continuous response variable $Y$. We then test whether the parameter vector $\beta = 0$. The parameter vectors are estimated via the least squares method and significance is tested using F random fields (Worsley et al., 1996). In general, increasing the number of predictor variables that are related to the response variable will increase the statistical power. Each categorical variable will be coded as an integer. For instance, the gender variable will be coded as 0 for male and 1 for female, while the group variable will be coded as normal=0, MCI=1 and AD=2. The following is a small subset of the analyses we will be performing. It is possible to combine the following specific techniques within the GLM framework.

Group comparison at baseline: Anatomical change is expected in all three groups over time, so it is crucial to remove the effect of age and other covariates such as gender and handedness in group comparisons (Figure 13). We will analyze the anatomical measures (gray matter density, white matter density, Jacobian determinant, modulated tissue density, cortical thickness, curvature, local surface area element) taken at time 0. The basic GLM for the baseline analysis will be

$$Y = \beta_0 + \beta_1\,\text{age} + \beta_2\,\text{gender} + \beta_3\,\text{group} + \epsilon.$$

Anatomical differences with respect to normal subjects will be detected by thresholding an F map that tests $\beta_3 = 0$. A minimum of n=20 subjects from each group will be used.

Longitudinal analysis: Functional GLM (James and Silverman, 2005) and functional PCA (Lecoutre, 1990; Yao et al., 2003; Muller, 2005) will be used to set up a longitudinal model. The basic model has the form $Y(t) = Z\gamma + \sum_k \beta_k \phi_k(t) + \epsilon$, where $\sum_k \beta_k \phi_k(t)$ is a basis function expansion of the anatomical measure in time $t$. We can use either splines or polynomials for the basis function expansion, or the functional PCA decomposition of $Y(t)$ (D.6.b.). For longitudinal modeling, due to the increased number of parameters to estimate, we will use at least n=40 from each group.

Combining multiple measures: We can also combine multiple anatomical measures into a single GLM framework via a cumulative logit model (So and Kuhfeld, 1995). Multiple measurements at each voxel will improve the GLM model fit and, in turn, should improve the statistical power. We model the categorical group variable $Y$ as a multinomial random variable and set up the cumulative logit model $\text{logit}\, P(Y \le j) = \alpha_j + X\beta$, where the predictors $X$ will contain anatomical measures from VBM, DBM and TBM. Testa et al. (2004) showed that the combination of VBM and ROI measures increases the accuracy of detecting hippocampal atrophy in AD. We expect a similar result from combining VBM, DBM and TBM. We will use the receiver-operating characteristic (ROC) curve to quantify the detection power (Hanley and McNeil, 1982; Testa et al., 2004).
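
A cumulative (proportional-odds) logit of this kind can be fit in MATLAB with mnrfit, assuming the Statistics Toolbox; the predictors below are hypothetical stand-ins for the VBM/DBM/TBM measures:

    % Cumulative logit: ordinal group (normal < MCI < AD) from toy
    % anatomical predictors.
    n = 90;
    X = [rand(n,1) rand(n,1) 1+0.1*randn(n,1)];  % toy VBM/DBM/TBM measures
    Ygroup = randi(3, n, 1);                     % 1 = normal, 2 = MCI, 3 = AD
    B = mnrfit(X, dummyvar(Ygroup), 'model', 'ordinal');   % proportional odds
    probs = mnrval(B, X, 'model', 'ordinal');    % fitted class probabilities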

Correlation analysis: For correlating multiple anatomical measures with multiple cognitive measures or clinical status such as APOE ε4 status, we will use the partial correlation mapping (PCM) strategy (Chung et al., 2005b) and its extension, multiple partial correlation mapping. The PCM is formulated within a GLM framework (Neter et al., 1996). This is a very useful statistical parametric mapping strategy for visualizing the correlation between anatomical measures and non-anatomical measures (Chung et al., 2005b) (Figure 16).

D.7.b. Image smoothing for the 3D whole brain/2D triangular mesh: To increase the signal-to-noise ratio and the smoothness of the anatomical measures needed for the random field theory, spatial smoothing is necessary before any statistical analysis. Smoothing has the effect of sensitizing the subsequent statistical analysis (Lerch and Evans, 2005). 3D Gaussian kernel smoothing will be used for the 3D whole brain volume based morphometries. For 2D triangular-mesh surfaces, heat kernel smoothing and its extension, geodesic kernel smoothing, will be used to smooth cortical data (Chung et al., 2005a; Chung and Tulaya, 2005c) (Figures 14, 15). The procedure for geodesic kernel smoothing is as follows. First, compute the geodesic distance $d(p,q)$ between two points p and q by dynamic programming (Bertsekas, 1987), which finds the geodesic distance by minimizing over all possible paths connecting p and q. Second, construct the geodesic kernel $K_\sigma(p,q) \propto \exp(-d^2(p,q)/2\sigma^2)$, normalized so that the weights sum to one. Third, perform iterated convolutions with the geodesic kernel. The implementation details can be found in Chung et al. (2005a). The MATLAB code for heat kernel smoothing is available through our website. The geodesic kernel smoothing is still under development. At this moment, heat kernel smoothing is done with 200 iterations. With the use of geodesic kernel smoothing, we expect the number of iterations to be reduced by a factor of 10, significantly boosting the computational speed.

Since the cortical thickness and the curvature metrics obtained from the TPS segmentation are already smooth, no further surface-based smoothing will be performed on them.

D.7.c. Statistical parametric mapping: The result of the statistical analysis will be presented using the statistical parametric mapping technique (Chung et al., 2001, 2003a). This also serves as a tool for spatially localizing the regions of anatomical change.

Correction for multiple comparisons: For DBM-specific statistical analysis in a two-sample comparison, Hotelling's $T^2$ statistic for the displacement field has been used (Chung et al., 2001), but for more complex experimental designs, we will use Roy's maximum root, which is equivalent to the maximum canonical correlation (Worsley et al., 2005). In other generic GLM settings, we will use multiple comparison corrections based on F random fields. We will perform a cross-validation study on the performance of three multiple comparison methods: FDR (Benjamini and Hochberg, 1995; Genovese et al., 2002), permutation tests (Nichols and Holmes, 2002) and the random field theory (Worsley, 1994; Worsley et al., 1996) on both synthetic data and the ADNI data set. It will be performed in a two-sample comparison setting with varying sample sizes (n=5, 10, 15, 20, 30; m=5, 10, 15, 20, 30). No study has compared these methods on anatomical measures, so we will test whether the permutation method offers substantial improvement over the random field method for low smoothness and low degrees of freedom. As a part of the cross comparison, we will develop software in MATLAB and C that takes a p-value, sample sizes and the multiple comparison method as inputs and produces the corresponding p-values under the other multiple comparison methods. It will enable researchers to cross-compare statistical parametric maps across studies that use different multiple comparison methods and p-values.
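
As an illustration of the permutation approach (Nichols and Holmes, 2002), the maximum statistic over voxels can be recomputed under random relabelings to obtain an FWE-corrected threshold; everything below is toy data, and prctile assumes the Statistics Toolbox:

    % Two-sample maximum-statistic permutation test (toy sketch).
    n1 = 15;  n2 = 15;  nVox = 2000;  nPerm = 1000;
    Y = [randn(n1,nVox); randn(n2,nVox)];        % rows: subjects, cols: voxels
    g = [ones(n1,1); zeros(n2,1)];
    tmap = @(Y,g) (mean(Y(g==1,:)) - mean(Y(g==0,:))) ./ ...
           sqrt(var(Y(g==1,:))/n1 + var(Y(g==0,:))/n2);   % Welch t per voxel
    tObs = tmap(Y,g);
    maxNull = zeros(nPerm,1);
    for b = 1:nPerm
        maxNull(b) = max(abs(tmap(Y, g(randperm(n1+n2)))));  % shuffled labels
    end
    thresh = prctile(maxNull, 95);               % 5% FWE-corrected threshold
    sig = abs(tObs) > thresh;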

Determining ROIs for additional analysis: The constructed SPM can serve as a basis for selecting ROIs for further analysis. The SPM will be thresholded, and the regions above or below the threshold are determined to be the regions of anatomical difference at a given statistical significance level, i.e. 95% or 99% (Figures 2, 5, 6, 7, 13). For instance, our preliminary VBM result on MCI (C.1.a.) shows multiple clusters with tissue atrophy (Figure 5). These clusters serve as ROIs for further analysis. Then, by correlating anatomical measures in the ROIs with additional non-imaging based measures such as APOE ε4 status, we can identify the most important regions that characterize the process of AD. This has the effect of increasing statistical sensitivity by reducing the number of multiple comparisons. This is the approach used by Dr. Johnson in an MCI study (Trivedi et al., 2005), where VBM was used to identify the mesial temporal lobe (including the hippocampus and entorhinal cortex), medial temporal lobe, medial parietal and posterior cingulate, and lateral parietal and temporal lobes as the ROIs for the further correlation analysis with non-imaging based measures. We also expect these to be the most important regions for the further analysis.

D.7.d. Best model selection/classification: Due to the tremendous amount of data in the ADNI database, it is desirable to construct the best model of AD progression in an automatic fashion. The number of available models increases exponentially as the number of variables increases. Among all available GLMs that include higher order terms and interactions, we will determine the model that best predicts anatomical change from non-anatomical measures by minimizing the Akaike information criterion (AIC) (Hastie et al., 2001; Wang et al., 2001). We will use the R software (Dalgaard, 2002) to perform the best model selection. This software will automatically select the variables that best predict anatomical changes by stepwise regression. If this approach is found to be useful, we will implement it in MATLAB with a GUI that will be used in conjunction with the SPM2 software.

-----------------------

Figure 20. TPS potential energy (Shi et al., 2000) of the cortex projected onto the unit sphere for better visualization. (a) Spherical harmonic basis multiscale representation. (b) The corresponding spherical harmonic basis. (c, d) Heat kernel smoothing based multiscale representation.

Figure 10. Multiscale representation of the cortex.

Figure 16. Partial correlation between a face recognition task and cortical thickness in normal control subjects, removing the effect of age and global cortical area difference. The asymmetric pattern of correlation is more enhanced in the occipital region.

Figure 8. TPS segmentation method. Unlike deformable surface algorithms that use triangular meshes, the smoothness of the tissue boundaries is automatically guaranteed. It should provide a better estimation of the Riemannian metric tensors and the cortical thickness measurements for SBM, as well as correcting the partial volume effect.

Figure 6. Curvature analysis strategy: (a) sum of the principal curvatures on the inner cortical surface; (b) sulcal pattern segmentation; (c) sulcal pattern mapping onto a sphere; (d) F-statistic map on a template surface identifying the regions of increasing complexity over time in normal subjects (Chung et al., 2003a; Chung et al., 2003b).

Figure 2. Statistically significant regions of gray matter volume increase (red), atrophy (blue) and deformation (yellow) over time in a normal population (Chung et al., 2001). The growth/atrophy rates were computed using the Jacobian determinant of TBM. The arrows show the deformation vector fields of DBM, indicating the direction of tissue growth.

Figure 13. (a) SPM before removing the effect of age. (b) SPM after removing the effect of age.

Figure 7. Statistically significant cortical area reduction (left) and gray matter atrophy over time (right) in a normal population (Chung et al., 2003a).

Figure 14. Diffusion smoothing increases the SNR of cortical thickness measurements. A cortical thickness map is projected onto a rectangle.

Figure 15. Noisy cortical thickness of a subject smoothed via the newly developed iterative heat kernel smoothing with 20, 100 and 200 iterations. Without data smoothing on the cortex, the sensitivity of the statistical analysis will decrease.

Figure 1. Top: gray and white matter segmentation. Bottom: tissue probability density maps used in VBM.

Figure 5. Modulated VBM analysis exhibiting regions where MCI subjects have less gray matter than controls.