University of Southern California



Project Narrative

a. Significance of Research

The Southern California Earthquake Center (SCEC) conducts a broad program of earthquake system science that seeks to develop a predictive understanding of earthquake processes, with a practical mission of providing society with an improved understanding of seismic hazards. The SCEC Community Modeling Environment (SCEC/CME) research collaboration develops physics-based computational models of earthquake processes and performs large-scale earthquake system science calculations (Jordan et al., 2003). This computational research integrates new scientific advances and new technical capabilities into broad-impact seismic hazard data products. We develop physics-based predictive models of earthquake rupture and ground motion processes, and we improve existing seismic hazard computational methods by integrating new, computationally intensive methods.

The SCEC/CME research collaboration involves seismologists, computer scientists, structural geologists, and engineers working to transform seismic hazard analysis into a physics-based science through HPC implementations of computational pathways (Figure 1). Each computational pathway represents a research activity that can contribute to improved earthquake ground motion forecasts. The pathways are interdependent: we iteratively improve our capabilities in each area and then integrate those improvements into better system-level models.

In this allocation request, SCEC researchers request INCITE computer resources to pursue a two-year earthquake system science research program that will improve the physics-based models implemented in our codes, improve the 3D earth models used as inputs to ground motion simulations, and improve the software efficiency of our research codes so that they fully utilize the capabilities of INCITE resource architectures. We will integrate the resulting scientific and software improvements into our ensemble-based, physics-based probabilistic seismic hazard analysis (PSHA) calculations. Our research plan will lead to improved physics-based scenario ground motion simulations and improved physics-based PSHA for long-term ground motion forecasts.

Improved Understanding of Seismic Hazard and Seismic Risk. The human and economic exposure (loss potential) in seismically active regions continues to increase with the incessant growth of urban areas and their dependence on interconnected infrastructure networks. Our proposed research project will use INCITE computing resources to advance and improve the accuracy of earthquake simulations as a means to better understand seismic hazard and assess seismic risk.

Physics of Earthquake Processes. Physics-based earthquake simulation uses numerical methods and high-performance computer applications to reproduce earthquake ground shaking through the explicit solution of the physics involved in a seismic fault’s rupture process and the resulting propagation of seismic waves. This deterministic approach to earthquake simulation offers a robust alternative to legacy procedures that are based on physical concepts but rely on non-deterministic methods. Our proposed INCITE research will help lead the scientific transition from stochastic and hybrid computational ground motion modeling to physics-based deterministic ground motion modeling.

Previous ground motion modeling efforts within SCEC, including the TeraShake (Olsen et al., 2006), the ShakeOut (Graves et al., 2008; Olsen et al., 2009; Bielak et al., 2010), and the M8 (Cui et al., 2010) scenario-earthquake simulations have been predominantly limited to low maximum frequencies (less than or equal to 1 Hz), and relatively high values of minimum shear wave velocity (greater than or equal to 500 m/s). In contrast, recent efforts by SCEC scientists have shown that it is now computationally tractable to produce forward wave propagation simulations at a much higher level of fidelity. Olsen and Mayhew (2010), for instance, produced a series of simulations of the Mw 5.4 2008 Chino Hills, California, earthquake up to 1.6 Hz, and Withers et al. (2010) and Taborda and Bielak (2013, 2014) produced simulations of the same event in a region of 180 km x 135 km using maximum frequencies of 2 and 4 Hz, respectively, with a minimum shear wave velocity of 200 m/s. Results from these simulations achieved reasonable levels of agreement when compared with recorded seismograms from over 300 strong-motion stations.

The quality of these high-frequency ground motion simulations, however, tends to deviate from observations as the frequency increases, especially above 1 Hz. A detailed analysis of the possible causes of the discrepancies at high frequencies points to limitations in our descriptions of the source and material models, which lack the level of resolution carried by the simulations (Withers et al., 2010; Taborda and Bielak, 2013, 2014). However, we observe that there are small areas where the match between the simulated and recorded seismograms is still satisfactory, even at frequencies above 2 Hz. Our work suggests that improvements in the modeling of the physics of the source and wave propagation, together with a more accurate definition of the crustal seismic velocity and geotechnical models, and the continued development of our simulation tools, will lead us to accurately reproduce earthquake effects at higher frequencies.

Using High Frequency Ground Motion Simulations To Improve PSHA. We will use our current ground motion modeling capabilities, including deterministic, stochastic, and hybrid approaches, as points of departure for improving current simulation methods and developing new modeling approaches that better reproduce the ground response at higher frequencies, and thus improve PSHA production runs and end-user results. SCEC/CME researchers are incorporating high-frequency characteristics into the source representation and into the structural heterogeneity of the seismic velocity models by considering aspects such as the geometrical complexity of the faults, random distributions of slip, rupture velocity, and rise time, and the variability of the material properties. This effort will be tied to other activities led by SCEC/CME scientists, such as the Broadband Platform, where historical earthquakes in Southern California and Japan are being used as reference events to produce high-frequency ground motion simulations that are compared with observed data.

Our research program will focus on improving ground motion modeling in three general areas: (1) develop more accurate 3D anelastic earth-structure models (often referred to as velocity models), (2) develop advanced source and wave propagation simulation methods, and (3) extend the physics-based CyberShake PSHA to frequencies above 1 Hz. With these improvements, we seek to have an impact in the following three geoscience and engineering research areas.

Three Dimensional Velocity and Attenuation Models. Accurate 3D wave propagation simulations require accurate 3D representations of the earth's crustal structure and sedimentary deposits. Models containing this information are called 3D velocity models. Several such models exist for California, though their accuracy is limited to low-frequency simulations; improvements are needed not only at high frequencies (> 2 Hz), but also at low frequencies. We are currently pursuing various alternatives to incorporate fine-scale irregularities into material models based on statistical characteristics observed in the raw data extracted from well logs. To aid this effort, we have implemented algorithms that allow us to generate large-scale heterogeneous 3D models using SCEC's Unified Community Velocity Model (UCVM) software. We will use UCVM to generate models that incorporate heterogeneities and test these models through high frequency simulations that will be compared to observed data. We will use SCEC's Broadband Platform tools to validate simulation results and to calibrate these perturbations for use in realistic earthquake simulations.

The quality of ground motion simulations at higher frequencies also depends more strongly on the attenuation structure of the medium than previous, lower-frequency simulations. In past simulations, SCEC researchers have used different viscoelastic models to represent the internal friction of the material (e.g., Day, 1998; Graves and Day, 2003; Bielak et al., 2011). These models are typically expressed in terms of the inverse quality factor, Q^-1, defined at a reference frequency, f0. Measurements of Q^-1 in California and elsewhere show that Q^-1 is roughly constant below about 1 Hz, but decreases rapidly at higher frequencies (Aki, 1980). The value of Q^-1 at f0 has usually been expressed as a function of the local S- and P-wave velocities using empirical rules (e.g., Olsen et al., 2003; Brocher, 2008). The attenuation structure of the upper crust, however, is highly heterogeneous and poorly known. Furthermore, recent investigations (Wang & Jordan, 2014) show that we will need to develop attenuation models that are also depth-dependent. We will investigate these issues and improve existing, or develop new, attenuation models using simulations with various Q models, independently or in association with the community velocity models (CVMs), including adjustments of the attenuation structure when the small-scale heterogeneities are incorporated (Withers et al., 2013).
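
To make this parameterization concrete, the sketch below shows one simple way an empirical Q assignment and a power-law frequency dependence could be coded. The coefficient in qs_from_vs and the cross-over behavior are illustrative placeholders, not the calibrated rules of Olsen et al. (2003), Brocher (2008), or the adjustments of Withers et al. (2013).

    #include <math.h>

    /* Illustrative sketch only: assign an anelastic quality factor from the
     * local S-wave velocity and apply a power-law frequency dependence above
     * a reference frequency f0. Coefficients are hypothetical placeholders. */
    static double qs_from_vs(double vs_m_per_s) {
        return 0.05 * vs_m_per_s;          /* hypothetical linear Qs-Vs rule */
    }

    static double q_at_frequency(double q0, double f, double f0, double gamma) {
        /* Q constant below f0; Q(f) = q0 * (f/f0)^gamma above f0, so that
         * Q^-1 decreases with increasing frequency, as observed. */
        return (f <= f0) ? q0 : q0 * pow(f / f0, gamma);
    }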

Earthquake Source Models and Dynamic Rupture. High-frequency simulations will require new source models capable of reproducing high-resolution, realistic rupture processes on the fault. This entails addressing aspects such as the roughness of the fault due to geometrical point-by-point variations in strike, dip, and rake angles, and the spatial heterogeneity in the distributions of slip, rupture velocity, and rise time. There are several ongoing efforts within SCEC to tackle this problem. For example, the rupture generator by Graves and Pitarka (2010) combines a low-wavenumber deterministic description of the source with a high-wavenumber stochastic component to account for the randomness in the rupture characteristics. While this method has not been studied extensively for source signals above 1 Hz, we will examine whether realistic source characteristics can be generated at higher frequencies. Steve Day's group at SDSU and Ralph Archuleta's group at UCSB are using dynamic rupture simulations to investigate high-frequency waves generated by irregular rupture propagation due to the geometrical complexity of the faults (Shi and Day, 2013). These simulations show that slip on non-planar faults generates stress perturbations that can either accelerate or decelerate the rupture as it propagates. This process releases bursts of seismic waves with wavelengths comparable to the size of the non-planar features, which in turn produce high-frequency ground motions. We will integrate 10-Hz dynamic rupture simulations with 10-Hz wave propagation simulations to investigate the effects of the source model on the quality of simulations at high frequencies.

Higher Frequency CyberShake PSHA Models. As part of our INCITE project, we will leverage improvements in velocity and source models to perform a CyberShake PSHA hazard model calculation at frequencies above 0.5 Hz. As a probabilistic ensemble calculation, CyberShake simulations are performed at lower frequencies than our highest-resolution validation simulations. Ground motion and seismic hazard groups have requested that the SCEC/CME group produce a 2-Hz physics-based California PSHA model. This calculation will push our current capabilities to new limits. Our INCITE research plan targets the use of DOE resources to help achieve this goal by conducting individual high-frequency simulations that advance our simulation capabilities, which will later be integrated into the CyberShake PSHA models.

Seismic hazard analysis (SHA) is the scientific basis for many engineering and social applications: performance-based design, seismic retrofitting, resilience engineering, insurance-rate setting, disaster preparation, emergency response, and public education. All of these applications require PSHA to express the deep uncertainties in the prediction of future seismic shaking (Baker and Cornell, 2005). Earthquake forecasting models comprise two types of uncertainty: an aleatory variability that describes the randomness of the earthquake system (Budnitz et al., 1997), and an epistemic uncertainty that characterizes our lack of knowledge about the system. According to this distinction, epistemic uncertainty can be reduced by increasing relevant knowledge, whereas the aleatory variability is intrinsic to the system representation and is therefore irreducible within that representation (Budnitz et al., 1997; Goldstein, 2013).


As currently applied in the United States and other countries, PSHA is largely empirical, based on parametric representations of fault rupture rates and strong ground motions that are adjusted to fit the available data. The data are often very limited, especially for earthquakes of large magnitude M. In California, for instance, no major earthquake (M > 7) has occurred on the San Andreas fault during the post-1906 instrumental era. Consequently, the forecasting uncertainty of current PSHA models, such as the U.S. National Seismic Hazard Mapping Project (NSHMP), is very high (Petersen et al., 2008; Stein et al., 2012). Reducing this uncertainty has become the “holy grail” of PSHA (Strasser et al., 2009).

In physics-based PSHA, we can reduce the overall uncertainty through an iterative modeling process that involves two sequential steps: (1) we first introduce new model components that translate aleatory variability into epistemic uncertainty, and (2) we then assimilate new data into these representations to reduce the epistemic uncertainty.

As an example, consider the effects of three-dimensional (3D) geological heterogeneities in Earth’s crust, which scatter seismic wavefields and cause local amplifications in strong ground motions that can exceed an order of magnitude. In empirical PSHA (Pathway 0 of Figure 1), an ergodic assumption is made that treats most of this variability as aleatory (Anderson & Brune, 1999). In physics-based PSHA, crustal structure is represented by a 3D seismic velocity model—a community velocity model (CVM) in SCEC lingo—and the ground motions are modeled for specified earthquake sources using an anelastic wave propagation (AWP) code, which accounts for 3D effects (Pathway 2 of Figure 1). As new earthquakes are recorded and other data are gathered (e.g., from oil exploration), the CVM is modified to fit the observed seismic waveforms (via Pathway 4 of Figure 1), which reduces the epistemic uncertainties in the Pathway 2 calculations. Figure 2 shows a comparison between a CyberShake hazard map calculated using a 1D velocity model and an equivalent CyberShake hazard map calculated using a 3D velocity model.
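
For reference, the probabilistic framework shared by the empirical and physics-based pathways can be written schematically as a hazard curve for the mean annual rate at which an intensity measure IM exceeds a level x at a site,

\lambda(\mathrm{IM} > x) = \sum_{k} r_k \, P(\mathrm{IM} > x \mid R_k),

where the sum runs over ruptures R_k, r_k is the long-term occurrence rate of rupture k taken from an earthquake rupture forecast, and P(IM > x | R_k) is the conditional exceedance probability. In empirical PSHA the conditional term comes from ground motion prediction equations; in CyberShake it is evaluated from the simulated seismograms for each rupture variation.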

The proposed research will improve the physical representations of earthquake processes and the deterministic codes for simulating earthquakes, which will benefit earthquake system science worldwide. California comprises about three-quarters of the nation’s long-term earthquake risk, and Southern California accounts for nearly one half (FEMA, 2000). The proposed project will support SCEC’s efforts to translate basic research into practical products for reducing risk and improving community resilience in California and elsewhere. Our proposed work can also help reduce the epistemic uncertainty in physics-based PSHA by increasing the scales of deterministic earthquake simulations. Recently performed variance-decomposition analysis demonstrates that improvements to simulation-based hazard models such as CyberShake can, in principle, reduce the residual variance by a factor of two relative to current NSHMP standards (Wang and Jordan, 2014). The consequent decrease in mean exceedance probabilities could be up to an order of magnitude at high hazard levels. Realizing this gain in forecasting capability would have a broad impact on the prioritization and economic costs of risk-reduction strategies (Strasser et al., 2009).

b. Research Objectives and Milestones

We define our INCITE research objectives in Table 1 as a series of computational advances in one or more of the three research areas described above. To accomplish these objectives, we will need to expand the spatial and temporal scales of physics-based PSHA, advancing and improving each of the software elements used for these calculations.

Table 1: SCEC INCITE Research Project Objectives

O1: Improve the resolution of dynamic rupture simulations by an order of magnitude and investigate the effects of realistic friction laws, geologic heterogeneity, and near-fault stress states on seismic radiation.

O2: Extend deterministic simulations of strong ground motions to 10 Hz to investigate the upper frequency limit of deterministic ground-motion prediction.

O3: Compute physics-based probabilistic seismic hazard analysis (PSHA) maps and validate them using seismic and paleoseismic data.

O4: Improve 3D earth structure models through full 3D tomography using observed seismicity and ambient noise.

We organize our project activities around these four major objectives in terms of eight computational milestones. Table 2 defines these milestones, which serve as progress indicators because they represent the first time we run a single simulation with a specific improvement. Most research studies, however, will require multiple simulations. Our simulation plan, which is the basis for our INCITE resource request, describes how many times we expect to run a milestone calculation in a full research calculation.

Previous INCITE Research. SCEC researchers have been awarded several previous INCITE allocations: a two-year award in 2009, a one-year award in 2011, a Director's Discretionary award on Jaguar in 2011, an Early Science Program award on Mira in 2012, a two-year INCITE award in 2012, and our current one-year 2014 INCITE award, “High Frequency Physics-Based Earthquake System Simulations.”

This year’s SCEC INCITE proposal builds on advances made on SCEC’s two most recent INCITE awards, our INCITE 2012-2013 CyberShake3.0 project and our current INCITE 2014 high frequency earthquake systems simulation (High-F). Both projects improved the tools and techniques used for earthquake physics modeling and ground motion simulations. The common fundamental technique used in both research efforts is 3D physics-based ground motion simulations. In our proposed research effort, we are working to improve the accuracy and efficiency of these simulations.

Year 1 Milestones:

M1 (Objectives O2, O4): Use full 3D tomography and comparative validations to improve existing California velocity models at 0.2 Hz for use in high frequency wave propagation simulations.

M2 (Objectives O2, O4): Run high frequency forward simulations using alternative material attenuation (Q) and seismic velocity models (CVMs). Compare the impact of material properties, topography, and models including spatial variability (heterogeneities) and soft-soil deposits (or geotechnical layers) on 4 Hz+ simulations by simulating forward events using alternative models and comparing results between synthetics and data.

M3 (Objectives O2, O3): Run high frequency forward simulations using alternative approaches to include the effects of off-fault and near-surface plastic deformation. Compare the impact of alternative plasticity models (linear-equivalent, 3D+1D hybrid, full 3D plastic) on 4 Hz+ simulations by simulating forward events and comparing the results among synthetics, empirical relationships, and data.

M4 (Objectives O1, O2, O3, O4): Calculate a 1.0 Hz CyberShake hazard curve. Use updated CVMs, source models, and codes to calculate a higher frequency CyberShake hazard curve.

Year 2 Milestones:

M5 (Objectives O2, O4): Use full 3D tomography and comparative validations to improve existing California velocity models at 0.5 Hz for use in high frequency wave propagation simulations.

M6 (Objectives O2, O4): Run high frequency forward simulations using alternative material attenuation (Q) and seismic velocity models (CVMs). Compare the impact of material properties, topography, and models including spatial variability (heterogeneities) and soft-soil deposits (or geotechnical layers) on 8 Hz+ simulations by simulating forward events using alternative velocity models and comparing the results.

M7 (Objectives O2, O3): Run high frequency forward simulations using alternative approaches to include the effects of off-fault and near-surface plastic deformation. Compare the impact of alternative plasticity models (linear-equivalent, 3D+1D hybrid, full 3D plastic) on 8 Hz+ simulations by simulating forward events and comparing the results among synthetics, empirical relationships, and data.

M8 (Objectives O1, O2, O3, O4): Calculate a 1.5 Hz CyberShake hazard curve. Use updated CVMs, source models, and codes to calculate a higher frequency CyberShake hazard curve.

Table 2: SCEC INCITE Milestone Simulations

Our current proposal focuses on improving the accuracy and usability of ground motion simulations as we increase the maximum simulated frequency. A critical aspect of this research activity is to compare our simulations against observations from historical earthquakes using different goodness-of-fit metrics, including peak ground motions. Once individual earthquake simulations are well validated, researchers are prepared to use the simulation tools to study potential (scenario) earthquakes, which can also be compared to empirical ground motion prediction equations (GMPEs) used by the engineering community.

During SCEC’s existing INCITE allocation, which spans January 2014 to December 2014, the SCEC research group has used INCITE resources to develop computational models of system-level earthquake processes. By running these models on INCITE systems, we have derived more accurate estimates of the strong ground motion expected from future earthquakes in California. Our earthquake system approach to seismic hazard research has led to a series of significant computational achievements. We improved physics-based simulation software by developing codes that can model geologic complexities, including topography and geologic discontinuities, and source complexities, such as irregular, dipping, and offset faults (Shi and Day, 2013a, 2013b). We increased the upper frequency limit of deterministic ground-motion predictions above 2 Hz and compared simulations with observed seismograms using goodness-of-fit measures of engineering relevance (Taborda et al., 2013, 2014; Olsen et al., 2013). We improved the SCEC 3D community velocity models by iterative full-3D inversions of large suites of observed waveforms from the Southern California Seismic Network, including earthquake phases and large ambient-noise datasets (Lee et al., 2013). We improved the CyberShake hazard model using alternative velocity models, and validated the results against existing seismic data and surveys of precariously balanced rocks in Southern California (Wang et al., 2013).

We also used recently developed GPU-based earthquake wave propagation codes to conduct the first 10-Hz deterministic simulation, using dynamic rupture propagation along a rough fault embedded in a 3D velocity structure with small-scale heterogeneities described by a statistical model. We carried out simulations of dynamic ruptures using a support-operator method, in which the assumed fault roughness followed a self-similar fractal distribution with wavelength scales spanning three orders of magnitude, from ~10^2 m to 10^5 m. We then used AWP to propagate the ground motions out to large distances from the fault in a characteristic 1D rock model with, and without, small-scale heterogeneities. The largest 0-10 Hz Titan run comprised 443 billion elements, with 6.8 TB of velocity model and dynamic source inputs, and was executed on 16,640 OLCF Titan nodes, achieving sustained petaflop/s performance (Cui et al., 2010; Cui et al., 2013).

c. Computational Readiness

Our proposed INCITE research plan will make use of four primary scientific simulation codes: AWP-ODC, AWP-ODC-GPU, Hercules, and Hercules-GPU. These codes can be mapped to our three major work areas: velocity model evaluation and improvements (AWP-ODC, Hercules), earthquake source improvements (AWP-ODC), and wave propagation improvements and verification and validation of simulations (AWP-ODC, AWP-ODC-GPU, Hercules, Hercules-GPU). AWP-ODC is also used for CyberShake strain Green tensor (SGT) calculations. We propose to use two different core modeling schemes, namely, finite differences, in AWP-ODC (Cui et al., 2010) and AWP-ODC-GPU (Cui et al., 2013), and finite elements, in Hercules (Tu et al., 2006; Taborda et al., 2010), and Hercules-GPU. We describe each in greater detail in the following sections.

AWP-ODC is a finite difference (FD) anelastic wave propagation Fortran code originally developed by Kim Bak Olsen, with dynamic rupture components later integrated by Steven Day and overall optimization for scalability on petascale computers by Yifeng Cui (Cui et al., 2010). AWP-ODC solves the 3D velocity-stress wave equation explicitly using a staggered-grid FD method with fourth-order accuracy in space and second-order accuracy in time. The code has been extensively validated for a wide range of problems, from simple point sources in a half-space to dipping extended faults in 3D crustal models.
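
As a schematic illustration of the underlying numerical scheme (not an excerpt from AWP-ODC), a fourth-order staggered-grid velocity update in one dimension can be written as follows; the 9/8 and -1/24 weights are the standard fourth-order staggered-grid coefficients, and the leapfrog update provides the second-order accuracy in time.

    /* Schematic 1D analogue of the fourth-order staggered-grid velocity
     * update used in velocity-stress finite difference schemes. */
    void update_velocity_1d(int n, double dt, double dx,
                            const double *rho,  /* density at velocity nodes */
                            const double *sxx,  /* stress at staggered nodes */
                            double *vx)         /* particle velocity (updated) */
    {
        const double c1 = 9.0 / 8.0, c2 = -1.0 / 24.0;
        for (int i = 2; i < n - 2; i++) {
            double dstress = c1 * (sxx[i] - sxx[i - 1])
                           + c2 * (sxx[i + 1] - sxx[i - 2]);
            vx[i] += (dt / (rho[i] * dx)) * dstress;  /* leapfrog in time */
        }
    }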

AWP-ODC-GPU is a C- and CUDA-based version of AWP-ODC, restructured from the original Fortran code to maximize throughput. We first converted the AWP-ODC Fortran/MPI code to a serial CPU program in C (Zhou et al., 2012). Several optimizations improve data locality: 1) memory accesses are coalesced for contiguous CUDA thread data access, 2) register usage is optimized to reduce global memory accesses, 3) L1 cache or shared memory usage is optimized for data reuse and register savings, and 4) constant coefficient variables are stored in read-only memory to benefit from the read-only data cache. In the multi-GPU implementation, each GPU is controlled by one or more associated CPUs. An algorithm-level communication reduction scheme is introduced, enabling effective overlap of communication and computation (Zhou et al., 2013). The code was used to perform a 0-10 Hz ground motion simulation on a mesh that included both small-scale fault geometry and media complexity (see Figure 1) and achieved 2.3 Petaflop/s on Titan.
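
The communication reduction and overlap strategy can be sketched in host-side C/MPI pseudocode as follows; the kernel and buffer names are placeholders rather than the actual AWP-ODC-GPU routines, and the GPU launches are represented by ordinary function calls.

    #include <mpi.h>

    void compute_halo_regions(void);      /* update cells that must be sent   */
    void compute_interior(void);          /* bulk of the work, no halo needed */
    void pack_halo(float *sendbuf);
    void unpack_halo(const float *recvbuf);

    void exchange_and_compute(float *sendbuf, float *recvbuf, int count,
                              int left, int right, MPI_Comm comm)
    {
        MPI_Request req[4];

        compute_halo_regions();           /* 1. boundary cells first          */
        pack_halo(sendbuf);

        /* 2. start non-blocking halo exchange with both neighbors            */
        MPI_Irecv(recvbuf,         count, MPI_FLOAT, left,  0, comm, &req[0]);
        MPI_Irecv(recvbuf + count, count, MPI_FLOAT, right, 1, comm, &req[1]);
        MPI_Isend(sendbuf,         count, MPI_FLOAT, left,  1, comm, &req[2]);
        MPI_Isend(sendbuf + count, count, MPI_FLOAT, right, 0, comm, &req[3]);

        compute_interior();               /* 3. overlap with communication    */

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        unpack_halo(recvbuf);             /* 4. finish boundary dependencies  */
    }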

Hercules and Hercules-GPU are two parallel development branches of the same code. The core Hercules code is an octree-based parallel finite element (FE) earthquake simulator developed by the Quake Group at Carnegie Mellon University as part of SCEC's Community Modeling Environment effort (Tu et al., 2006; Taborda et al., 2010). The core code is written in C and uses standard MPI libraries. Hercules-GPU is a recently developed CUDA-based implementation maintained in the same Git repository as the core code. Hercules integrates an efficient octree-based hexahedral mesh generator and an explicit FE formulation to solve the linear momentum (partial differential) equation for 3D wave propagation problems in highly heterogeneous media, driven by earthquake sources modeled with kinematic faulting. The Hercules FE forward solver has second-order accuracy in both time and space.

Both code families have been validated against a wide range of benchmark and verification problems (e.g., Bielak et al., 2010). AWP-ODC is used to perform both forward wave propagation simulations and 3D tomography research; Hercules is used primarily to perform forward wave propagation simulations.

Algorithms and numerical techniques employed (e.g. finite element, iterative solver)

AWP-ODC uses a structured 3D grid with fourth-order finite differences for velocity and stress. The CPU-based code uses perfectly matched layer (PML) absorbing boundary conditions (ABCs) on the sides and bottom of the grid, and a zero-stress free surface boundary condition at the top. The GPU-based code uses ABCs based on 'sponge layers', which apply a damping term to the full wavefield inside the sponge layer and are unconditionally stable.
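
A minimal sketch of a Cerjan-style sponge taper is shown below; the layer width and damping constant are illustrative values, not those used in AWP-ODC-GPU.

    #include <math.h>

    /* Illustrative sponge-layer weight: inside an absorbing zone of nb grid
     * planes, the wavefield is multiplied by a smooth factor < 1 each time
     * step; the interior is left untouched. */
    double sponge_weight(int i, int n, int nb, double alpha)
    {
        int d;                                  /* distance into the sponge  */
        if (i < nb)             d = i;
        else if (i >= n - nb)   d = n - 1 - i;
        else                    return 1.0;     /* interior: no damping      */
        double x = alpha * (double)(nb - d);
        return exp(-x * x);                     /* ~1 at the interior edge   */
    }

    /* Example use along one axis: v[i] *= sponge_weight(i, n, 20, 0.015);   */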

Hercules solves the elastic wave equations by approximating the spatial variability of the displacements with tri-linear elements and the time evolution with central differences. The resulting scheme has a quadratic convergence rate in both time and space. The code uses a plane-wave approximation of the absorbing boundary condition, and introduces a viscoelastic attenuation mechanism through a memory-efficient combination of generalized Maxwell and Voigt models in the bulk to achieve an approximately constant Q over frequency. The traction-free boundary condition at the free surface is natural in the FE method, so no special treatment is required. The solver computes displacements in an element-by-element fashion, scaling stiffness and lumped-mass matrix templates according to the material properties and octant edge size. This approach allows Hercules to considerably reduce memory requirements compared to standard FE implementations.
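
In its undamped form, this explicit scheme corresponds to the standard central-difference update with a lumped (diagonal) mass matrix,

\mathbf{u}^{n+1} = 2\,\mathbf{u}^{n} - \mathbf{u}^{n-1} + \Delta t^{2}\,\mathbf{M}^{-1}\!\left(\mathbf{f}^{n} - \mathbf{K}\,\mathbf{u}^{n}\right),

where u^n is the vector of nodal displacements at time step n, M is the lumped mass matrix, K is the stiffness matrix applied element by element from the scaled templates, and f^n is the external force vector; the viscoelastic attenuation terms enter as additional internal forces on the right-hand side.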

Programming languages, libraries, and parallel programming system used

AWP-ODC is written in Fortran 90. Message passing is done with the Message Passing Interface (MPI). The nearest-neighbor communication pattern of this application benefits most from a 3D torus network topology because of its communication locality, but it has also proven efficient on hierarchical network topologies with high bisection bandwidth and good support for nearest-neighbor communication. AWP-ODC I/O is done using MPI-IO, and the velocity output data are written to a single file.

AWP-ODC-GPU, written in C/CUDA/MPI, has two versions that produce equivalent results. The standard version, referred to as AWP, efficiently calculates ground motions at many sites, as required by the High-F project. The SGT version, referred to as AWP-SGT, efficiently calculates the strain Green tensors needed to obtain ground motions at a single site from many ruptures (e.g., CyberShake). The GPU code is capable of reading in a large number of dynamic sources and petabytes of heterogeneous velocity mesh inputs. MPI-IO is used to write the simulation outputs to a single file concurrently. This works particularly well because the CPUs have enough memory to aggregate outputs in memory buffers before they are flushed to disk. The code supports run-time parameters to select a subset of the output data by skipping mesh points as needed.
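
The single-shared-file output pattern described above can be sketched with standard collective MPI-IO calls, where each rank writes its aggregated block at a rank-dependent offset; the file name, block layout, and sizes below are placeholders rather than the actual AWP output format.

    #include <mpi.h>

    /* Sketch of collective single-file output: every rank writes its own
     * contiguous block of floats at a disjoint offset in one shared file. */
    void write_velocity_block(const float *buf, MPI_Offset nvals,
                              MPI_Comm comm, const char *fname)
    {
        int rank;
        MPI_File fh;
        MPI_Comm_rank(comm, &rank);

        MPI_File_open(comm, (char *)fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        MPI_Offset offset = (MPI_Offset)rank * nvals * (MPI_Offset)sizeof(float);
        MPI_File_write_at_all(fh, offset, buf, (int)nvals, MPI_FLOAT,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
    }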

Hercules is self-contained parallel software that uses MPI for communication and has excellent scalability and portability. It implements an end-to-end approach that combines, in the same code, the processing of input data for earthquake source generation, the generation of an unstructured hexahedral mesh, explicit forward finite element solving, and I/O operations. At its core it uses elements of the etree library (Tu et al., 2003), which provides a database approach to managing large octree datasets. The etree library components are used primarily for manipulating the material model during mesh generation and for the mesh generation process itself. The library guarantees fast query and retrieval transactions between the program and the model (a file ranging from roughly 100 to 800 GB in size), and during meshing it helps order the mesh components (nodes and elements) in tables with a z-order in memory.
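
Z-ordering interleaves the bits of the three octree coordinates into a single locality-preserving key; the compact sketch below illustrates the idea (it is not the etree implementation).

    #include <stdint.h>

    /* Spread the lower 21 bits of v so each bit is followed by two zeros,
     * allowing three coordinates to be interleaved into one 63-bit key.   */
    static uint64_t spread3(uint64_t v)
    {
        uint64_t key = 0;
        for (int b = 0; b < 21; b++)
            key |= ((v >> b) & 1ULL) << (3 * b);
        return key;
    }

    /* Morton (z-order) key: nearby (x, y, z) octants map to nearby keys,
     * which keeps mesh nodes and elements close together in the tables.   */
    uint64_t morton3d(uint32_t x, uint32_t y, uint32_t z)
    {
        return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
    }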

Hercules-GPU is a CUDA-based implementation. Specifically, the stiffness contributions, the attenuation contributions of the BKT model, and the displacement updates have been implemented entirely on the accelerator using the CUDA SDK. These operations comprise the majority of the physics calculations performed by the solver when determining the solution to the linear momentum equations. Hercules-GPU computes, at runtime, the optimum kernel launch parameter configuration for the current compute capability, which serves to maximize occupancy. The software also supports an arbitrary number of GPU devices per compute node, and it uses a simple load balancing scheme to assign each host CPU to a GPU device on the compute node. The Hercules-GPU implementation has been successfully executed on Titan. The largest production problem size tested thus far was a 1 Hz simulation of the 2008 Chino Hills earthquake, comprising 70 million finite elements and 120K time steps, with attenuation (BKT damping) enabled, using 128 Titan XK7 nodes (2,048 CPU cores and 128 GPUs). We have also run several scalability tests using a 2.8 Hz simulation of the same event, comprising 1.5 billion finite elements run for 2,000 time steps. In the production run, the GPU-calculated displacements were each within 1.0 x 10^-13 meters of the reference CPU displacements in absolute terms, with a maximum difference of 0.1 percent for any single displacement. This represents a very high level of agreement and validates the GPU approach. The GPU implementation achieves a speedup of about 2.5x over the CPU implementation.

Use of Resources Requested

Our simulation plan defines a set of simulations that will represent a series of significant scientific and computational advances in physics-based ground motion modeling. Each simulation goal will require both scientific and computational advances.

Table 3 provides a breakdown of the eight large-scale simulation categories that we propose to perform using INCITE resources during our two-year allocation period. Each category represents a scientific study, usually based on running multiple milestone calculations as part of a production research calculation. The table also includes a description of the representative simulation for the milestones addressed by each category, and identifies the scientific leader of the sub-group responsible for running the simulations.

The estimates shown in Table 3 are based on measured performance on INCITE resources: AWP-ODC on Mira, and AWP-ODC-GPU and Hercules-GPU on Titan.

Table 3: SCEC INCITE Simulation Plans and Resource Estimates


Details about each of these simulation goals and how the computational estimates were made are provided below.

G1: In the first stage (G1), we will carry out full-3D tomography (F3DT) at 0.2 Hz on Mira. The model obtained under SCEC's CyberShake 3.0 allocation will be used as the starting model for the inversion. The simulation mesh is composed of ~688 million grid points, and each simulation uses 5,182 cores for 0.5 hours. We have collected Ns = 232 seismic sources in the tomography region. We will adopt the adjoint method for the tomographic inversion at this stage. In the adjoint method, each iteration requires 4 Ns simulations, which amounts to about 2.4 million core-hours per iteration. We plan to carry out 20 iterations at this stage, for a total estimated computational cost of around 48 M core-hours. For each seismic source, the amount of disk storage is around 0.56 TB; considering all 232 seismic sources, the total disk storage needed for the duration of the inversion on Mira is around 129 TB.
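
The core-hour total follows directly from these figures:

4 N_s \times (5{,}182\ \text{cores} \times 0.5\ \text{h}) = 928 \times 2{,}591\ \text{core-hours} \approx 2.4\ \text{M core-hours per iteration}, \qquad 20 \times 2.4\ \text{M} \approx 48\ \text{M core-hours}.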

G2: Investigate the impact of alternative velocity models with stochastic material perturbations by using AWP-ODC-GPU to run 2-Hz simulations on a 169-billion-element mesh (a regional model including all the major basins in the Greater Los Angeles area), running for 140K time steps. This requires 35,000 XK7 node-hours per run, or 12.8 M INCITE SUs for the planned eight runs. Outputs are decimated; a total of 120 TB of storage space is required.

G3: These simulations will evaluate different aspects of the structural representation used in forward wave propagation simulation, including the seismic velocities drawn from community velocity models (CVMs), the chosen attenuation (Q) models (including frequency dependence), material plasticity, and surface topography. Hercules-GPU will be used to run simulations up to 4 Hz, with a minimum shear wave velocity of 200 m/s, in domains on the order of 180 km x 135 km x 60 km for the Greater Los Angeles area. Depending on the chosen CVM, this corresponds to unstructured finite-element meshes of between 5 and 15 billion elements. We will use different CVM and Q models to perform verification and validation runs of scenario and past earthquakes. Some simulations will include the effects of material plasticity for near-surface soft-soil deposits. We estimate these runs to be on the order of 100K simulation time steps; based on performance tests done on Titan under an earlier INCITE allocation, we estimate each run will require up to 0.75 M SUs.

G4: Probabilistic seismic hazard curves will be calculated using the CyberShake computational platform, to a maximum deterministic frequency of 1.0 Hz, using the GPU version of AWP-SGT. The simulation mesh contains approximately 10 billion mesh points. Each SGT calculation will require 40K timesteps, and SGT timeseries data will be saved for approximately 6 million mesh points. To produce a hazard map for Southern California, we plan to calculate curves for 300 sites, requiring a total of 99 M CPU-hours and 10 TB of storage for seismogram timeseries data.

G5: In the second inversion stage (G5), an inversion using the outcome of G1 as the starting model will be carried out at 0.5 Hz on Mira. The simulation mesh is composed of around 5.5 billion grid points. Each simulation uses 40,000 cores for around 0.5 hours (28 minutes). We will still use the Ns = 232 seismic sources collected in the previous stage. Each adjoint iteration will cost around 19 M core-hours. We plan to carry out 5 iterations, which amounts to 89 M INCITE core-hours. For each seismic source, the amount of disk storage is around 1.9 TB; considering all the seismic sources, the total disk storage needed for the duration of the inversion on Mira is around 441 TB.

G6: Each 10-Hz simulation will require a mesh with 43.3 billion elements and a time step of Δt = 0.0003 s, with an estimated cost of 3.8 M SUs per run. These simulations correspond to a volume of 180 km x 135 km x 62 km that includes all the major basins in the Greater Los Angeles area. We will also use a smaller domain of 82 km x 82 km x 41 km to perform a set of simulations that include surface topography.

G7: This is an extension of the G3 activities to simulations at a resolution of 8+ Hz. We estimate that simulations at this resolution will require finite element meshes on the order of ~70 billion elements and ~350K time steps. This corresponds to an estimated 5 M SUs per run, and we expect to do up to 4 simulations of this size. In this second stage we will put increased emphasis on combining the different aspects studied in G3; that is, we will combine preferred velocity models and a chosen Q model with the effects of plasticity or topography in single simulations.

G8: This is an extension of G4 activities to probabilistic seismic hazard curves at a frequency of 1.5 Hz. The simulation mesh contains 83 billion mesh points. Each SGT calculation will require 80K timesteps, and SGT timeseries data will be saved for approximately 6 million mesh points. Calculating hazard curves for 200 sites will require 132 M SUs and 15 TB of disk storage.

Parallel Performance

The CPU-based AWP-ODC code has demonstrated super-linear strong-scaling speedup to the full Jaguar system, and was used in 2010 to conduct a full dynamic simulation of a magnitude-8 earthquake on the southern San Andreas fault up to 2 Hz, with 220 Teraflop/s sustained performance on 223K Jaguar XT5 cores (Cui et al., 2010). The code was recently measured at 653 Teraflop/s on the Cray XE6 Blue Waters system. AWP-ODC has also demonstrated outstanding performance on ALCF Mira, achieving 104 Teraflop/s on 32,768 Mira cores (Figure 3, left). These benchmarks were based on the pure MPI code without hybrid or QPX implementations. Further improvements are planned.

AWP-ODC-GPU has achieved perfect weak scalability on Titan, sustaining 2.3 Petaflop/s and 100% parallel efficiency on up to 8,192 GPUs. At the node-to-node level, the GPU-powered SGT code attained a performance improvement of a factor of 3.7 on XK7 compared to the CPU code running on XE6.

[Figure 3. Left: AWP-ODC performance on ALCF Mira. Right: Hercules-GPU scalability on OLCF Titan.]

Hercules has historically shown very good strong and weak scalability on various HPC systems such as NICS's Kraken, NCSA's Blue Waters, and ALCF Mira, and initial scalability tests confirm very good performance for Hercules-GPU on OLCF's Titan (Figure 3, right). Hercules has been tested at frequencies up to 10 Hz and is expected to generate unstructured meshes of up to ~70 billion finite elements. The largest weak-scaling tests (not shown here for brevity) correspond to a mesh for a simulation up to 10.8 Hz, with about 56 billion elements. The Hercules-GPU solver demonstrated very good scalability up to 4,096 XK7 nodes on Titan, with a speedup of 2.5 times over the CPU solver, indicating that it is a cost-effective code for that system.

Data management and workflow development will be a joint effort among multiple SCEC researchers, in collaboration with Scott Klasky's ADIOS group at ORNL and Jeroen Tromp of Princeton. A self-describing, platform-independent, adaptable seismic data format (ASDF) is being developed with the ORNL ADIOS group. This effort is supported, in parallel, by the NSF-funded SCEC community I/O library, which aims to unify the seismic data format and workflow automation and will provide the abstractions necessary to enable effective use of computational resources.

Development Work

Our proposed research will use the well-established computational codes discussed above. We anticipate continued development of all of these codes during the INCITE allocation period. AWP-ODC-GPU will be modified to evaluate the impact of plasticity and frequency-dependent attenuation models.

UCVM Toolbox. UCVM provides a standard interface to alternative 3D community velocity models, which are key inputs to the simulation process. This software will be used to generate velocity models with small-scale heterogeneities, and to integrate results from tomography research back into the original velocity models. Currently, the software tools that generate small-scale heterogeneities are written in Matlab and must be converted to C to be used by UCVM. Once this is accomplished, UCVM can be used to generate models with equivalent properties for both the regular meshes used by AWP-ODC and the etree-based meshes used by Hercules. UCVM has been used successfully on various NSF and XSEDE computing facilities, and we expect no problems using it on DOE machines.
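
A minimal sketch of how the converted C tools might overlay statistical perturbations on properties extracted from a velocity model is given below; query_background_vs(), random_field(), the standard deviation, and the perturbation cap are hypothetical stand-ins, not the actual UCVM API or the calibrated heterogeneity model.

    #include <stddef.h>

    /* Hypothetical helpers standing in for the real UCVM query and the
     * statistical heterogeneity generator (zero mean, unit std. dev.).    */
    double query_background_vs(double x, double y, double z);
    double random_field(double x, double y, double z);

    /* Perturb a background Vs model with a capped fractional perturbation. */
    void build_perturbed_vs(const double *x, const double *y, const double *z,
                            double *vs, size_t n, double sigma, double cap)
    {
        for (size_t i = 0; i < n; i++) {
            double vs0  = query_background_vs(x[i], y[i], z[i]);
            double pert = sigma * random_field(x[i], y[i], z[i]);
            if (pert >  cap) pert =  cap;   /* limit fractional perturbation */
            if (pert < -cap) pert = -cap;
            vs[i] = vs0 * (1.0 + pert);
        }
    }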

I/O and Fault Tolerance. We will work with OLCF, with support from the ADIOS group, to tune our community I/O library to meet the community's needs for high-performance, flexible, and dynamically runtime-adaptable I/O solutions. The OLCF ADIOS group is also helping to tune the workflow for tomographic calculations. We will increase the resilience of SCEC applications to system-level failures and add ADIOS-based checkpoint-restart capabilities to AWP-ODC-GPU.

Development of Discontinuous Mesh AWP. We are developing a new structured discontinuous-mesh code, AWP-DM, that will allow us to include the near-surface low-velocity material in simulations of large earthquakes at higher frequencies than previously possible. We plan to use this approach in High-F simulations of the ShakeOut scenario at frequencies of 4 Hz or higher. These simulations will be a key effort toward our goal of an outer/inner scale ratio of 10^20, representing nearly four orders of magnitude increase in computational requirements compared to previous large-scale simulations on Titan. While 3D discontinuous-mesh codes have already been implemented by other modelers, the proposed AWP-DM development arises from the need for excellent parallel scalability and load balancing. We already have a working scalar DM version of AWP-ODC, which shows an excellent match to the version that uses a single equally spaced grid. Remaining work includes parallelization and optimization of the AWP-ODC-DM code and further validation in fully 3D media.

Preparing SCEC HPC applications for next-generation systems. We will work closely with OLCF staff to explore the opportunities of heterogeneous computing using both CPUs and GPUs. The particular challenge lies in optimizing communications between CPU and GPU sub-domains, which will be aligned to minimize communication. SCEC is interested in early access to DOE’s next-generation prototype hardware in the next 1-2 years, and will work with vendor experts, facility staff, and compiler developers to port and tune SCEC HPC applications for next-generation systems.

Enabling Workflows on INCITE systems. SCEC computational platforms require the execution of complex workflows that challenge the most advanced DOE high performance computing systems. SCEC has developed, in collaboration with USC ISI, high-throughput workflow management capabilities to support those platforms, in particular the CyberShake project. These capabilities build on tools such as Globus, HTCondor, and Pegasus-WMS, which have been successfully deployed on many NSF HPC systems to manage the data and job dependencies of multiple hazard map calculations, combining the execution of massively parallel SGT calculations with hundreds of millions of loosely coupled post-processing jobs. Looking ahead, we plan to increase the maximum frequency of the model from 0.5 Hz to 1.5 Hz, which will require simulations with 27x the mesh points and 3x the time steps. An overview of our CyberShake workflow is shown in Figure 4. In order for SCEC to reach the computational goals in this proposal, we must develop scientific workflow capabilities on INCITE resources. We specifically plan to develop workflow capabilities on Titan, in collaboration with INCITE resource providers, in order to complete our CyberShake research.
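
The resource growth follows from standard grid-resolution scaling: for a fixed minimum shear-wave velocity and a fixed number of grid points per wavelength, the number of mesh points grows as the cube of the maximum frequency and the number of time steps grows linearly with it,

\frac{N_{1.5\,\mathrm{Hz}}}{N_{0.5\,\mathrm{Hz}}} = \left(\frac{1.5}{0.5}\right)^{3} = 27, \qquad \frac{n_{t,1.5\,\mathrm{Hz}}}{n_{t,0.5\,\mathrm{Hz}}} = \frac{1.5}{0.5} = 3,

so the cost of each SGT simulation grows by roughly two orders of magnitude.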

Topology-mapping Tuning. The nearest-neighbor communication pattern of our HPC codes benefits most from a 3D torus network topology because of its communication locality. AWP-ODC topology mapping has been tuned on NCSA Blue Waters using the Cray topaware technique, matching the virtual 3D Cartesian mesh topology of a typical simulation mesh to an elongated physical subnet prism within the Blue Waters torus. We plan to work with OLCF and Cray staff to enable this feature for our large-scale simulations on Titan (Figure 5).

d. References

Anderson, J., “Quantitative measure of the goodness-of-fit of synthetic seismograms,” Proc. 13th World Conf. Earthquake Eng., Paper 243, Int’l. Assoc. Earthquake Eng., Vancouver, British Columbia, Canada, 2004.

Anderson, J. G., and J. N. Brune, “Probabilistic seismic hazard analysis without the ergodic assumption,” Seism. Res. Lett. 70, 19–28, 1999.

Baker, J. W., N. Luco, N. A. Abrahamson, R. W. Graves, P. J. Maechling, and K. B. Olsen, “Engineering uses of physics-based ground motion simulations,” Tenth U. S. Conference on Earthquake Engineering, Anchorage, AK, July 21-25, 2014.

Baker, J. W., and C. A. Cornell, “Uncertainty propagation in probabilistic seismic loss estimation,” Structural Safety, 30, 236–252, 2008, doi:10.1016/j.strusafe.2006.11.003.

Bielak, J., H. Karaoglu, and R. Taborda, “Memory-efficient displacement-based internal friction for wave propagation simulation,” Geophysics, 76, no. 6, 131–145, 2011, doi:10.1190/geo2011-0019.1.

Bielak, J., R. W. Graves, K. B. Olsen, R. Taborda, L. Ramírez-Guzmán, S. M. Day, G. P. Ely, D. Roten, T. H. Jordan, P. J. Maechling, J. Urbanic, Y. Cui, and G. Juve, “The ShakeOut earthquake scenario: verification of three simulation sets,” Geophys. J. Int’l., 180, no. 1, 375–404, 2010, doi:10.1111/j.1365-246X.2009.04417.x.

Bielak, J., Y. Cui, S. M. Day, R. Graves, T. Jordan, P. Maechling, K. Olsen, and R. Taborda, High frequency deterministic ground motion simulation (High-F project plan), Southern California Earthquake Center, 2012, Available: (March 2014).

Böse, M., R. Graves, D. Gill, S. Callaghan and P. Maechling, “CyberShake-derived ground-motion prediction models for the Los Angeles region with application to earthquake early warning,” Geophys. J. Int’l., in revision, 2014.

Budnitz, R. J., G. Apostolakis, D. M. Boore, L. S. Cluff, K. J. Coppersmith, C. A. Cornell, P. A. Morris, Senior Seismic Hazard Analysis Committee; Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts, U.S. Nuclear Regulatory Commission, U.S. Dept. of Energy, Electric Power Research Institute; NUREG/CR-6372, UCRL-ID-122160, vol. 1-2, 1997.

Callaghan, S., P. Maechling, P. Small, K. Milner, G. Juve, T. H. Jordan, E. Deelman, G. Mehta, K. Vahi, D. Gunter, K. Beattie, and C. Brooks, “Metrics for heterogeneous scientific workflows: a case study of an earthquake science application,” Int’l. Journal of High Performance Computing Applications, 25: 274, 2011, doi:10.1177/1094342011414743.

Christen, M., O. Schenk, and Y. Cui, “PATUS: parallel auto-tuned stencils for scalable earthquake simulation codes,” SC’12, Salt Lake City, UT, Nov 10-16, 2012.

Computational Science: Ensuring America’s Competitiveness, The President’s Information Technology Advisory Committee (PITAC) 2005.

Couvares, P., T. Kosar, A. Roy, J. Weber, and K Wenger, “Workflow in Condor,” Workflows for e-Science, Springer Press, January 2007, ISBN: 1-84628-519-4.

Cui, Y., R. Moore, K. B. Olsen, A. Chourasia, P. Maechling, B. Minster, S. Day, Y. Hu, J. Zhu, A. Majumdar, and T. H. Jordan, “Towards petascale earthquake simulations,” Acta Geotechnica, Springer, 2008, doi:10.1007/s11440-008-0055-2.

Cui, Y., K. B. Olsen, T. H. Jordan, K. Lee, J. Zhou, P. Small, D. Roten, G. P. Ely, D. K. Panda, A. Chourasia, J. Levesque, S. M. Day, and P. J. Maechling, “Scalable earthquake simulation on petascale supercomputers,” Proc. of the 2010 ACM/IEEE Int’l. Conference for High Performance Computing, Networking, Storage, and Analysis, New Orleans, LA, Nov. 13-19, 2010, doi:10.1109/SC.2010.45 (ACM Gordon Bell Finalist).

Cui, Y., E. Poyraz, K. B. Olsen, J. Zhou, K. Withers, S. Callaghan, J. Larkin, C. Guest, D. Choi, A. Chourasia, Z. Shi, S.M. Day, J. P. Maechling, and T. H. Jordan, “Physics-based seismic hazard analysis on petascale heterogeneous supercomputers,” SC’13, Denver, CO, Nov 17-22, 2013 (2013a).

Cui, Y., E. Poyraz, S. Callaghan, P. Maechling, P. Chen, and T. Jordan, “Accelerating CyberShake calculations on XE6/XK7 platforms of Blue Waters,” Extreme Scaling Workshop 2013, August 15-16, Boulder, 2013 (2013b).

Deelman, E., G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G. B. Berriman, J. Good, A. Laity, J. C. Jacob, and D. S. Katz, “Pegasus: a framework for mapping complex scientific workflows onto distributed systems,” Scientific Programming, 13(3), 219-237, 2005.

Ely, G. P., S. M. Day, and J.-B. Minster, “Dynamic rupture models for the southern San Andreas fault,” Bull. Seism. Soc. Am., 100 (1), 131-150, 2010, doi:10.1785/0120090187.

Federal Plan for High-end Computing: Report of the High-end Computing Revitalization Task Force, Executive Office of the President, Office of Science and Technology Policy, 2004.

FEMA, “HAZUS 99, estimated annualized earthquake losses for the United States,” Federal Emergency Management Agency Report 366, Washington, D.C., 32, September, 2000.

Graves, R., B. Aagaard, K. Hudnut, L. Star, J. Stewart, and T. H. Jordan, “Broadband simulations for Mw 7.8 southern San Andreas earthquakes: ground motion sensitivity to rupture speed,” Geophys. Res. Lett., 35, L22302, 2008, doi:10.1029/2008GL035750.

Graves, R., T. H. Jordan, S. Callaghan, E. Deelman, E. Field, G. Juve, C. Kesselman, P. Maechling, G. Mehta, K. Milner, D. Okaya, P. Small, and K. Vahi, “CyberShake: a physics-based seismic hazard model for Southern California,” Pure and Applied Geophysics, 168(3), 367-381, 2011.

Goldstein, M., “Observables and models: exchangeability and the inductive argument,” Bayesian Theory and Its Applications, ed. P. Damien, P. Dellaportas, N. G. Olson, and D. A. Stephens, Oxford, 3-18, 2013.

Goulet, C., “Summary of a large scale validation project using the SCEC broadband strong ground motion simulation platform,” Seism. Res. Lett., SSA Annual Mtg, Anchorage, AK, 2014.

Hauksson, E., and P. M. Shearer, “Attenuation models (QP and QS) in three dimensions of the southern California crust: inferred fluid saturation at seismogenic depths,” J. Geophys. Res., 111, B05302, 2006, doi:10.1029/2005JB003947.

He, Xuebin, et al., “Cross-layer semantics- and application-aware systems support for data analytics and visualization,” submitted to BigData program, June 9, 2014.

Isbiliroglu, Y., R. Taborda, and J. Bielak, “Coupled soil-structure interaction effects of building clusters during earthquakes,” Earthquake Spectra, 2013, doi:10.1193/102412EQS315M.

Komatitsch, D., and J. Tromp, “Spectral-element simulations of global seismic wave propagation—I. Validation,” Geophys. J. Int’l., 149, 390-412; “Spectral-element simulations of global seismic wave propagation—II. Three-dimensional models, oceans, rotation and self-gravitation,” Geophys. J. Int’l., 150, 308–318, 2002.

Langou, J., Z. Chen, G. Bosilca, and J. Dongarra, “Recovery patterns for iterative methods in a parallel unstable environment,” SIAM Journal on Scientific Computing, 30(1), 102-116, 2007.

Lee, E., P. Chen, and T. H. Jordan, “Testing waveform predictions of 3D velocity models against two recent Los Angeles earthquakes,” submitted to Seism. Res. Lett., 2014.

Maechling, P., F. Silva, S. Callaghan, and T. H. Jordan, “SCEC broadband platform: system architecture and software implementation,” submitted to Seism. Res. Lett., May 2014.

McQuinn, E., A. Chourasia, J. H. Minster, and J. Schulze, “GlyphSea: interactive exploration of seismic wave fields using shaded glyphs,” AGU Fall Mtg., Abstract IN23A-1351, San Francisco, CA, 13-17 Dec, 2010.

Mu, D., P. Chen, and L. Wang, “Accelerating the discontinuous Galerkin method for seismic wave propagation simulations using the graphic processing unit (GPU) – single-GPU implementation,” Computers and Geosciences, 51, 282-292, 2013.

National Research Council, Getting Up to Speed: The Future of Supercomputing, The National Academies Press, Washington, DC, 2004.

Olsen, K. B., and R. Takedatsu, “The SDSU broadband ground motion generation module BBtoolbox version 1.5,” submitted to Seism. Res. Lett., May 2014.

Olsen, K. B., R. J. Archuleta, and J. R. Matarese, “Three-dimensional simulation of a magnitude 7.75 earthquake on the San Andreas fault,” Science, 270, 1628-1632, 1995.

Olsen, K. B., S. M. Day, and C. R. Bradley, “Estimation of Q for long period (>2 s) waves in the Los Angeles Basin,” Bull. Seism. Soc. Am., 93, 627-638, 2003.

Olsen, K. B., and J. E. Mayhew, “Goodness-of-fit criteria for broadband synthetic seismograms, with application to the 2008 Mw 5.4 Chino Hills, California, earthquake,” Seism. Res. Lett., 81, no. 5, 715–723, 2010, doi:10.1785/gssrl.81.5.715.

Pasyanos, M. E., “A lithospheric attenuation model of North America,” Bull. Seism. Soc. Am., 103, 1-13, 2013, doi:10.1785/0120130122.

Peter, D., D. Komatitsch, Y. Luo, R. Martin, N. Le Goff, E. Casarotti, P. Le Loher, F. Magnoni, Q. Liu, C. Blitz, T. Nissen-Meyer, P. Basini, and J. Tromp, “Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes,” Geophys. J. Int’l., 186, 721-739, 2011.

Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler, R. L. Wesson, Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills, and K. S. Rukstales, Documentation for the 2008 update of the United States national seismic hazard maps, U. S. Geological Survey Open-File Report, 2008-1128, 2008.

Porter, K., L. Jones, D. Cox, J. Golkz, K. Hudnut, D. Mileti, S. Perry, D. Ponti, M. Reichle, A. Z. Rose, et al., “The ShakeOut scenario: a hypothetical Mw7.8 earthquake on the southern San Andreas fault,” Earthquake Spectra, 27(2), 239-261, 2011.

Poyraz, E., H. Xu, and Y. Cui, “Application specific I/O optimizations on petascale supercomputers,” ICCS’14, Cairns, Australia, June 10-12, 2014.

Restrepo, D., and J. Bielak, “Virtual topography–a fictitious domain approach for analyzing surface irregularities in large-scale earthquake ground motion simulation,” Int’l. J. Num. Meth. Eng., in revision, 2014.

Restrepo, D., R. Taborda, and J. Bielak, “Effects of soil nonlinearity on ground response in 3D simulations — an application to the Salt Lake City basin,” Proc. 4th IASPEI/IAEE Int’l. Symp., Effects of Surface Geology on Seismic Motion, University of California, Santa Barbara, CA, August 23–26, 2012.

Restrepo, D., R. Taborda, and J. Bielak, “Simulation of the 1994 Northridge earthquake including nonlinear soil behavior,” Proc. SCEC Annual Mtg., Abstract GMP-015, Palm Springs, CA, September 9–12, 2012 (2012b).

Reynolds, C. J., S. Winter, G. Z. Terstyanszk, and T. Kiss, “Scientific workflow makespan reduction through cloud augmented desktop grids,” Third IEEE Int’l. Conference on Cloud Computing Technology and Science, 2011.

Roten, D., K. B. Olsen, S. M. Day, Y. Cui, and D. Fah, “Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity,” Geophys. Res. Lett., 41, 2014, doi:10.1002/2014GL059411.

Rynge, M., G. Juve, K. Vahi, S. Callaghan, G. Mehta, P. J. Maechling, and E. Deelman, “Enabling large-scale scientific workflows on petascale resources using MPI master/worker,” XSEDE’12, Article 49, 2012.

Savran, W., and K. B. Olsen, “Deterministic simulation of the Mw Chino Hills event with frequency-dependent attenuation, heterogeneous velocity structure, and realistic source model,” Seism. Res. Lett., SSA Annual Mtg., Anchorage, AK, 2014.

ShakeOut Special Volume, Earthquake Spectra, vol. 27, no. 2, 2011.

Shi, Z., and S. M. Day, “Ground motions from large-scale dynamic rupture simulations (poster 023),” SCEC Annual Mtg., Palm Springs, CA, 2012 (2012a).

Shi, Z., and S. M. Day, “Characteristics of ground motions from large-scale dynamic rupture simulations,” AGU Fall Mtg., Abstract S14A-06, San Francisco, California, 3-7 Dec, 2012 (2012b).

Shi, Z., and S. M. Day, “Validation of dynamic rupture simulations for high-frequency ground motion,” Seism. Res. Lett., vol. 84, 2013 (2013a).

Shi, Z., and S. M. Day, “Rupture dynamics and ground motion from 3-D rough-fault simulations,” Journal of Geophysical Research, 118, 1–20, 2013, doi:10.1002/jgrb.50094 (2013b).

Strasser, F. O., N. A. Abrahamson, and J. J. Bommer, “Sigma: issues, insights, and challenges,” Seism. Res. Lett. 80, 40–56, 2009.

Stein S., R. Geller, and M. Liu, “Why earthquake hazard maps often fail and what to do about it,” Tectonophysics, 562-563, 1-25, 2012, doi:10.1016/j.tecto.2012.06.047.

Taborda, R., and J. Bielak, “Large-scale earthquake simulation — computational seismology and complex engineering systems,” Comput. Sci. Eng., 13, 14–26, 2011, doi:10.1109/MCSE.2011.19.

Taborda, R., and J. Bielak, “Ground-motion simulation and validation of the 2008 Chino Hills, California, earthquake,” Bull. Seism. Soc. Am., 103, no. 1, 131–156, 2013, doi:10.1785/0120110325.

Taborda, R., J. Bielak, and D. Restrepo, “Earthquake ground motion simulation including nonlinear soil effects under idealized conditions with application to two case studies,” Seism. Res. Lett., 83, no. 6, 1047–1060, 2012, doi:10.1785/0220120079.

Taborda, R., J. López, H. Karaoglu, J. Urbanic, and J. Bielak, Speeding up finite element wave propagation for large-scale earthquake simulations, CMU-PDL-10-109, 2010, Available: .

The Opportunities and Challenges of Exascale Computing, Summary Report of the Advanced Scientific Computing Advisory Committee (ASCAC) Subcommittee on Exascale Computing, Department of Energy, Office of Science, Fall 2010.

Tu, T., H. Yu, L. Ramírez-Guzmán, J. Bielak, O. Ghattas, K.-L. Ma, and D. R. O’Hallaron, “From mesh generation to scientific visualization: an end-to-end approach to parallel supercomputing,” SC’06, Proc. of the 2006 ACM/IEEE Int. Conf. for High Performance Computing, Networking, Storage and Analysis, Tampa, Florida, 15, 2006, Available: (March 2014), doi:10.1145/1188455.1188551.

Wang, F., and T. H. Jordan, “Comparison of probabilistic seismic hazard models using averaging-based factorization,” Bull. Seism. Soc. Am., in press, 2014.

Withers, K., K. B. Olsen, Z. Shi, S. M. Day, and R. Takedatsu, “Deterministic high-frequency ground motions from simulations of dynamic rupture along rough faults,” Seism. Res. Lett., 84:2, 334, 2013 (2013a).

Withers, K., K. B. Olsen, and S. M. Day, “Deterministic high-frequency ground motion using dynamic rupture along rough faults, small-scale media heterogeneities, and frequency-dependent attenuation”, SCEC Annual Mtg., (Abstract 085), Palm Springs, CA, 2013 (2013b).

Zhou, J., D. Unat, D. Choi, C. Guest, and Y. Cui, “Hands-on performance tuning of 3D finite difference earthquake simulation on GPU fermi chipset,” Proc. of Int’l. Conference on Computational Science, vol. 9, 976-985, Elsevier, ICCS’2012, Omaha, NE, June 4-6, 2012.

Zhou, J., Y. Cui, E. Poyraz, D. Choi, and C. Guest, “Multi-GPU implementation of a 3D finite difference time domain earthquake code on heterogeneous supercomputers,” Proc. of Int’l. Conference on Computational Science, Vol. 18, 1255-1264, Elsevier, ICCS’2013, Barcelona, Spain, June 5-7, 2013.

-----------------------

Figure 4. CyberShake workflow. Circles indicate computational modules and rectangles indicate files and databases. We have automated these processing stages using Pegasus-WMS software to ensure that processes with data dependencies run in the required order. Workflow tools also increase the robustness of our calculations by providing automated job submission, error detection, and restart capabilities that enable us to restart a partially completed workflow from an intermediate point.
