Molecular Structure Determination on a Computational & Data Grid

Mark L. Green and Russ Miller

Center for Computational Research, University at Buffalo

Department of Computer Science and Engineering

State University of New York, Buffalo, NY 14260

mlgreen@ccr.buffalo.edu, miller@buffalo.edu

Abstract

The focus of this paper is on the design and implementation of a critical program in structural biology onto two computational and data grids. The first is the Buffalo-based ACDC grid, which uses facilities at SUNY-Buffalo and several research institutions that are under distinct control. The second is Grid2003, the iVDGL grid established late in 2003 primarily for physics and astronomy applications. In this paper, we present an overview of the ACDC Grid and Grid2003, focusing on the implementation of several new ACDC computational and data grid tools.

1. Introduction

The ACDC-Grid [11,20,21] is a proof-of-concept grid implemented in Buffalo, NY. The driving structural biology application provides a cost-effective solution to the problem of determining molecular structures from X-ray crystallographic data via the Shake-and-Bake direct methods procedure. We use the SnB computer program [1], which is based on the Shake-and-Bake method of molecular structure determination [2,3], as the prototype application for the template design presented in this paper. Shake-and-Bake was developed in Buffalo and is the program of choice for structure determination in many of the 500 laboratories that have acquired it [4,5,6]. In addition, the SnB program is well understood by the authors, one of whom is a principal author of the Shake-and-Bake methodology and the SnB program. Finally, SnB is a computationally intensive program that can exploit the grid's ability to present the user with a large-scale computational infrastructure, allowing for the processing of a large number of related molecular trial structures [7,8].

The SnB program uses a dual-space direct-methods procedure for determining crystal structures from X-ray diffraction data. This program has been used in a routine fashion to solve difficult atomic resolution structures, containing as many as 1000 unique non-Hydrogen atoms, which could not be solved by traditional reciprocal-space routines. Recently, the focus of the Shake-and-Bake research team has been on the application of SnB to solve heavy-atom and anomalous-scattering substructures of much larger proteins, provided that 3-4 Å diffraction data can be measured. In fact, while traditional direct methods had been applied successfully to substructures containing on the order of a dozen selenium sites, SnB has been used to determine as many as 180 selenium sites. Such solutions by SnB have led to the determination of complete structures containing hundreds of thousands of atoms.

The Shake-and-Bake procedure consists of generating structure invariants and coordinates for random-atom trial structures. Each such trial structure is subjected to a cyclical automated procedure that includes computing a Fourier transform to determine phase values from a proposed set of atoms (initially random), determining a figure-of-merit [9], refining phases to locally optimize the figure-of-merit, computing a Fourier transform to produce an electron-density map, and employing a peak-picking routine to examine the map and find the maxima. These peaks are then considered to be atoms, and the cyclical process is repeated for a predetermined number of cycles.
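To make the flow of this dual-space cycle concrete, the following Python sketch mirrors the steps described above. It is illustrative only: the helper routines below are trivial stand-ins, not the actual SnB algorithms or data structures.

    # Schematic sketch of one Shake-and-Bake trial. The helper functions are
    # placeholders that only mimic the shape of the real computation.
    import random

    def compute_phases(atoms):
        # Placeholder: derive a pseudo "phase set" from atom coordinates.
        return [sum(a) % 1.0 for a in atoms]

    def refine_phases(phases):
        # Placeholder for the local optimization of the figure-of-merit.
        return [p * 0.9 + 0.05 for p in phases]

    def pick_peaks(phases, n_atoms):
        # Placeholder peak picking: the peaks become the next trial structure.
        return [(p, random.random(), random.random()) for p in phases[:n_atoms]]

    def figure_of_merit(phases):
        # Placeholder figure-of-merit: lower is better in this toy version.
        return sum((p - 0.5) ** 2 for p in phases) / len(phases)

    def run_trial(n_atoms=50, n_cycles=20):
        # Start from a random-atom trial structure.
        atoms = [(random.random(), random.random(), random.random())
                 for _ in range(n_atoms)]
        for _ in range(n_cycles):
            phases = compute_phases(atoms)          # reciprocal space
            phases = refine_phases(phases)          # phase refinement
            atoms = pick_peaks(phases, n_atoms)     # real space (peak picking)
        return figure_of_merit(phases)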

Trial structures are continually and simultaneously processed until a solution is discovered by viewing a histogram of final figure-of-merit values. The running time of this procedure ranges from minutes on PCs to months on supercomputers. For each completed trial structure, the final figure-of-merit value is stored in a file. The user can review a dynamic histogram during the processing of the trials in order to determine whether or not a solution is likely present in the set of completed trial structures. A bimodal distribution with significant separation is a typical indication that solutions are present, whereas a unimodal, bell-shaped distribution typically indicates a set comprised entirely of nonsolutions.
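The histogram inspection described above can be sketched as follows. The simple "largest empty gap" test used here to flag a possible bimodal distribution is an illustrative heuristic of ours, not the criterion used by SnB or its users.

    # Sketch: histogram of final figure-of-merit values from completed trials.
    def histogram(values, n_bins=20):
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_bins or 1.0
        counts = [0] * n_bins
        for v in values:
            counts[min(int((v - lo) / width), n_bins - 1)] += 1
        return lo, width, counts

    def looks_bimodal(values, n_bins=20):
        # A run of empty interior bins between two populated regions hints at
        # the separated solution/nonsolution distribution described above.
        _, _, counts = histogram(values, n_bins)
        longest_gap, run = 0, 0
        for c in counts[1:-1]:
            run = run + 1 if c == 0 else 0
            longest_gap = max(longest_gap, run)
        return longest_gap >= max(2, n_bins // 10)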

The current premise is that the computing framework for this Shake-and-Bake procedure need not be restricted to local computing resources. Therefore, a grid-based implementation of Shake-and-Bake methodology can afford scientists with limited local computing capabilities the opportunity to solve structures that would be beyond their means.

2. Grid Introduction

The Computational Grid represents a rapidly emerging and expanding technology that allows geographically distributed resources (CPU cycles, data storage, sensors, visualization devices, and a wide variety of Internet-ready instruments), which are under distinct control, to be linked together in a transparent fashion [10,22,23]. The concept of "the grid" is, in many ways, analogous to that of the electrical power grid, where the end user cares only that the resource (electricity) is available and not how or where it is produced and transported. The power of the Grid lies not only in its ease of use, but also, and perhaps most importantly, in the total aggregate computing power, data storage, and network bandwidth that can readily be brought to bear on a particular problem. The Grid will support remote collaborative operational sessions to coordinate, for example, homeland security activities throughout the US, enabling secure interactions between geographically distributed groups and providing each group with a common operational picture and access to national and local databases. Since resources in a grid are pooled from many different domains, each with its own security protocol, ensuring the security of each system on the Grid is of paramount importance.

Grids are now a viable solution to certain computationally- and data-intensive computing problems for the following reasons: (a) The Internet is reasonably mature and able to serve as fundamental infrastructure. (b) Network bandwidth has increased to the point of being able to provide efficient and reliable services. (c) Storage capacity has now reached commodity levels, where one can purchase a terabyte of disk for roughly the same price as a high-end PC. (d) Many instruments are Internet-aware. (e) Clusters, supercomputers, storage and visualization devices are becoming more easily accessible. (f) Applications have been parallelized. (g) Collaborative environments are moving out of the alpha phase of implementation.

For these and other reasons, grids are starting to move out of the research laboratory and into early-adopter production systems. The focus of grid deployment continues to be on the difficult issue of developing high-quality middleware that can be used to build relatively simple or extremely complex mission-critical workflows. Workflows are composed of processing elements, such as signal-processing algorithms, database queries, or visualization rendering, and data elements, such as files, databases, or data streams, among many other options. Using a system of Grid services with well-defined standards and protocols for communicating, one can build dynamic, scalable, fault-tolerant, and efficient workflows that can meet the needs of somewhat unpredictable computational challenges.
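As an illustration of this workflow model, the sketch below composes processing elements and data elements into a simple pipeline with a retry step. The types and names are hypothetical and do not reflect any ACDC-Grid or Grid-service API.

    # Hypothetical workflow built from processing and data elements.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class DataElement:
        name: str        # e.g., a file, database table, or data stream
        location: str    # e.g., a URL or grid file path

    @dataclass
    class ProcessingElement:
        name: str                                   # e.g., "fourier_transform"
        run: Callable[[List[DataElement]], List[DataElement]]
        retries: int = 2                            # simple fault tolerance

    def execute_workflow(stages: List[ProcessingElement],
                         inputs: List[DataElement]) -> List[DataElement]:
        data = inputs
        for stage in stages:
            for attempt in range(stage.retries + 1):
                try:
                    data = stage.run(data)
                    break
                except Exception:
                    if attempt == stage.retries:
                        raise    # replication or rescheduling would go here
        return data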

The Grid community has advanced the efficient usage of grids by defining metrics that measure the performance of grid applications and architectures and rate the functionality and efficiency of grid architectures. These metrics facilitate good engineering practices by allowing alternative implementations to be compared quantitatively, and they provide grid users with information about system capabilities so that applications can be developed and tuned toward informed objectives. Therefore, we propose a set of tasks that assess grid performance at the level of user applications. The tasks will demonstrate:

· the dynamic quality of well-designed Grid services, which have the ability to self-replicate in the event of failure,

· the ability to scale the amount of Grid resources utilized to meet Quality of Service constraints,

· the simultaneous execution of multiple workflows on independent Grid services to provide fault tolerance and mission-critical redundancy, and

· efficient solution time while reclaiming unused computational resources within the organization and at potential partner institutions.

Many types of computational tasks are naturally suited to grid environments, including data-intensive applications. Therefore, we characterize existing and emerging grid applications in order to understand and capture their computational needs and data usage patterns. Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems play a central role in data storage, access, organization, and authorization for numerous applications. We do not seek to develop new data storage systems; rather, part of our research effort is targeted at making such systems more readily accessible, individually or collectively, within a grid framework.

As computers become ubiquitous, ideas for the implementation and use of grid computing are developing rapidly and gaining prominence. As grid computing initiatives move forward, issues of interoperability, security, performance, management, and privacy need to be carefully considered. Many of these issues relate to infrastructure, where no value will be gained from proprietary solutions. The Security Area (SEC), for example, is concerned with various issues relating to authentication and authorization in Grid environments in order to ensure application and data integrity; it is also generating best-practice scheduling and resource management documents, protocols, and API specifications to enable interoperability. Several layers of security, data encryption, and certificate authorities already exist in grid-enabling toolkits such as the Globus Toolkit 3 [24].

3. Grid Capabilities

Several developments make grids practical today:

· The Internet is infrastructure, with increased network bandwidth and advanced services.

· Advances in storage capacity: a terabyte of disk now costs less than $5,000.

· Internet-aware instruments.

· Increased availability of compute resources, including clusters, supercomputers, storage, and visualization devices.

· Advances in application concepts, including computational science (simulation and modeling) and collaborative environments that support large and varied teams.

· Grids today are moving towards production, with a focus on middleware.

4. Advanced Computational Data Center (ACDC) Grid Development

The development of the Advanced Computational Data Center Grid (ACDC-Grid) portal focuses on establishing an extensible and robust Application Programming Interface (API) that uses Grid-enabling Application Templates (GATs) [11] as a foundation. The ACDC-Grid GATs define standard procedures that many scientific applications require when executing in a grid environment. Several grid metrics and components that are required for defining a GAT are presented in this paper. Fig. 1 shows the ACDC-Grid web portal, which is the single point of contact for all ACDC-Grid computational and data grid resources and repositories.

Grid Portal Development

The status of Globus integration within the portal is as follows:

· Globus Toolkit version 2.2.4 is installed and in production.

· A preview release of Globus Toolkit version 3 is being evaluated.

· All Metacomputing Directory Service (MDS) information is stored in the portal database, eliminating the need for LDAP.

· Condor and Condor-G are used for resource management and Grid job submissions.

Figure 1. The ACDC-Grid web portal user interface.

The ACDC-Grid is based on the Globus Toolkit middleware version 2.2.4, and the web portal is served by Apache HTTP Server version 2.0. All of the web portal pages are dynamically created using the PHP hypertext preprocessor scripting language, JavaScript, and real-time MySQL database access. Each web portal page also enforces strict security and authentication procedures, defining a fine-grained custom interface for each grid user. Several grid user Access Control Levels (ACLs) have been defined for unauthenticated, general, system administrator, and grid administrator web portal page access.
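The following sketch illustrates the kind of page-level access-control check implied by these ACLs. It is written in Python for uniformity with the other sketches in this paper (the portal itself is implemented in PHP), and the page names and level ordering are assumptions.

    # Illustrative access-control check for portal pages (not the actual PHP code).
    from enum import IntEnum

    class AccessLevel(IntEnum):
        UNAUTHENTICATED = 0
        GENERAL = 1
        SYSTEM_ADMIN = 2
        GRID_ADMIN = 3

    # Hypothetical page requirements keyed by portal page name.
    PAGE_REQUIREMENTS = {
        "job_status": AccessLevel.GENERAL,
        "platform_config": AccessLevel.SYSTEM_ADMIN,
        "acl_management": AccessLevel.GRID_ADMIN,
    }

    def may_view(user_level: AccessLevel, page: str) -> bool:
        # Pages not listed default to requiring an authenticated (general) user.
        required = PAGE_REQUIREMENTS.get(page, AccessLevel.GENERAL)
        return user_level >= required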

The base Globus Toolkit middleware Metacomputing Directory Service (MDS) information is stored in the ACDC-Grid database and can be queried directly or displayed in a Java tree from the web portal, as shown in Fig. 2.

Figure 2. The ACDC-Grid Metacomputing Directory Service web portal with Java tree view.

Meta-scheduler Integration

The Grid Portal meta-scheduler incorporates platform statistics obtained on an hourly time scale, including:

· load,

· running/queued jobs,

· backfill availability,

· queue schedule, and

· production rate.

All statistics are stored in the portal database and can be charted for any historical period.

The ACDC-Grid job monitoring system is designed to be an extremely lightweight and non-intrusive tool for monitoring applications and resources on computational grids. It also provides a historical retrospective of the utilization of such resources, which can be used to track efficiency, adjust grid-based scheduling, and perform a predictive assignment of applications to resources.

Figure 3. ACDC-Grid historical chart of jobs completed on all compute platforms during the period of September 1 through December 31, 2003.

The ACDC-Grid database aggregates compute platform statistics on an arbitrary time scale (e.g., 5, 15, 30, or 60 minutes), including load, running/queued jobs, backfill availability, queue schedule, and compute platform production rate. This information can be queried directly by the user and presented in chart form, both for historical grid jobs, as shown in Fig. 3, and for running or queued jobs, as shown in Fig. 4.
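A sketch of how such platform statistics might be binned into fixed time windows before being stored is shown below; the record fields are assumptions based on the statistics named above, not the portal's actual schema.

    # Sketch: binning per-platform statistics into fixed time windows.
    from collections import defaultdict

    def aggregate(samples, bin_minutes=60):
        """samples: iterable of dicts with keys 'platform', 'timestamp' (minutes),
        'load', 'running', 'queued', and 'backfill_nodes' (assumed fields)."""
        bins = defaultdict(list)
        for s in samples:
            bins[(s["platform"], s["timestamp"] // bin_minutes)].append(s)
        rows = []
        for (platform, bin_index), group in sorted(bins.items()):
            n = len(group)
            rows.append({
                "platform": platform,
                "bin_start_minute": bin_index * bin_minutes,
                "avg_load": sum(g["load"] for g in group) / n,
                "avg_running": sum(g["running"] for g in group) / n,
                "avg_queued": sum(g["queued"] for g in group) / n,
                "avg_backfill_nodes": sum(g["backfill_nodes"] for g in group) / n,
            })
        return rows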

Figure 4. ACDC-Grid running/queued jobs CPU consumption based on user groups for all resources.

The intelligent management of consumable resources, including computational cycles, requires accurate, up-to-date information. The ACDC job monitoring system provides near real-time snapshots of critical computational job metrics, which are stored in a database and presented to the user via dynamically generated web pages. Jobs are segmented into several classes (e.g., running jobs and queued jobs), and statistics for each class are created on the fly.

Figure 5. ACDC-Grid metrics and statistics of jobs currently running on all compute platforms.

As shown in Fig. 5, a variety of critical information is available to the user, including the total number of currently running jobs, the total CPU hours consumed by these jobs, the number and percentage of running jobs attributed to the miller group, the average number of nodes per job, the average runtime per job, and the average and total CPU-hour production of the currently running jobs. Additional information presented includes raw values and the percentage attributed to the user or group of concern; in this case, information is presented for the miller group (i.e., the aggregate of all users in the miller group). It should be noted that the pull-down menus give the user many options in terms of the data to be presented. For example, the chart can be based on total jobs, total CPU hours, or total runtime; the chart can include information based on running or queued jobs; the resources considered can be a complete grid or a subset of a grid; and the resources can be compiled with respect to a user, group, virtual organization, or queue, to name a few.
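The per-group raw values and percentages described above could be computed along the following lines; the record fields and grouping keys are illustrative.

    # Sketch: computing group shares of running jobs by a chosen metric.
    def group_share(jobs, group_key="group", metric="cpu_hours"):
        """jobs: list of dicts, e.g. {'group': 'miller', 'cpu_hours': 12.5, 'nodes': 8}."""
        totals = {}
        grand_total = 0.0
        for job in jobs:
            value = float(job[metric])
            totals[job[group_key]] = totals.get(job[group_key], 0.0) + value
            grand_total += value
        return {
            g: {"total": t,
                "percent": 100.0 * t / grand_total if grand_total else 0.0}
            for g, t in totals.items()
        }

    # Example: share of total CPU hours consumed by each group.
    # group_share(running_jobs, group_key="group", metric="cpu_hours")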

Furthermore, all metrics obtained from a given resource are time stamped so that the age of the information displayed or stored is known. In order to present voluminous amounts of data in a hierarchical fashion, all top-level ACDC job monitoring charts have a "drill down" feature that gives the user increasingly detailed information about the jobs of interest. This feature is essential when a post-mortem analysis of historical grid jobs is required for assessing the performance of the Grid (e.g., the number of CPU hours consumed over a given period of time across all available resources). The ability to drill down is also valuable when additional information about running or queued jobs is required, as shown in Fig. 6.

Figure 6. ACDC-Grid additional job information dynamically queried from the database.

As shown in Fig. 7, the ACDC-Grid is a heterogeneous collection of compute platforms using several different native queue managers (e.g., OpenPBS, Condor, and fork), a variety of operating systems (e.g., RedHat Linux, IRIX, and Windows), and Wide Area Network (WAN) connections of various bandwidths (e.g., GigE, Fast Ethernet, and T1).

The ACDC computational and data grid performance often depends on the availability, latency, and bandwidth of the WAN. Thus, all compute platforms use the Network Weather Service (NWS) [25] for reporting the essential latency and bandwidth statistics to the database. This information can be presented to the user, as shown in Fig. 8. This information is also directly available to the ACDC-Grid GATs in order to efficiently manage the computational and data grids.
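The following sketch shows the kind of record such monitoring might insert into the portal database; the table and column names are hypothetical, and the sketch does not use the actual NWS interfaces.

    # Sketch: storing latency/bandwidth measurements in a portal database.
    # Table and column names are hypothetical; this is not the NWS API.
    import sqlite3
    import time

    def record_measurement(db_path, source, destination, latency_ms, bandwidth_mbps):
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS network_status (
                            source TEXT, destination TEXT,
                            latency_ms REAL, bandwidth_mbps REAL,
                            measured_at REAL)""")
        conn.execute("INSERT INTO network_status VALUES (?, ?, ?, ?, ?)",
                     (source, destination, latency_ms, bandwidth_mbps, time.time()))
        conn.commit()
        conn.close()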

Figure 7. ACDC-Grid computational resources.


Figure 8. ACDC-Grid network status, which can be charted for any given period of time for all compute platforms.

The web portal also provides a number of job management capabilities:

· The standard SnB user interface can be used to create and post-process any Grid job submission.

· A secure upload and download facility is provided for staging jobs on and off the Grid.

· Current and historical Grid job status is provided by a simple portal database query interface.

· User usage profiles are charted up to the minute.

· Current computational grid statistics can be charted over user-defined intervals.

In addition, an hourly script collects new LDAP group "cluster" information on every platform, including:

· compute node processor information (types of processors, memory, scratch space, etc.),

· queue information (username, nodes, queue, runtime, showbf, showq),

· platform scratch space information (location, size, availability, etc.), and

· a common benchmark run on all compute nodes in order to assess compute power across platforms (in-core and out-of-core numbers).

A historical record of this information is kept for all platforms on an hourly time scale. The meta-scheduler and Grid status displays additionally require node availability for every platform (the number of nodes and time available, and the number of nodes that are online, down, active, and in total), as well as any other available information that can be used to produce Grid statistics on platforms, users, and the entire center.

Grid user files and jobs are managed according to the following policies:

· All grid user files are located in a local directory structure, with the grid username used as the root, and disk quotas are enforced.

· Secure upload, download, and storage of files is provided, and all file management is performed through the Grid Portal file management interface.

· All Grid-based jobs are submitted through the Grid Portal job management interface and are staged to the grid user's Grid Portal local directory prior to execution/submission.

· All grid users also have access to Grid Portal local scratch space for exchanging files and/or data with other grid users.

· No grid user applications are executed on the Grid Portal, and no individual user accounts are created for the grid user.

In an increasing number of scientific disciplines, large data collections are emerging as important community resources. The ACDC Data Grid has a significant role to play in managing and manipulating these data collections, and it inherently complements the ACDC Computational Grid. A data grid denotes a large network of distributed storage resources, such as archival systems, caches, and databases, which are logically linked so as to create a sense of global persistence. The goals of the ACDC data grid are (i) to design and implement transparent management of data distributed across heterogeneous resources, such that the data is accessible via a uniform web interface, and (ii) to enable the transparent migration of data between various resources while preserving uniform access for the user. Maintaining metadata information about each file and its location in a global database table is essential; see Fig. 9.
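A minimal sketch of the kind of per-file record such a global metadata table might hold is given below; the field names follow the attributes discussed in this section, but they are not the actual ACDC database schema.

    # Minimal sketch of a per-file metadata record for the global data grid table.
    from dataclasses import dataclass

    @dataclass
    class DataGridFile:
        filename: str
        owner: str
        division: str             # "user", "group", or "public"
        size_bytes: int
        modification_time: float  # seconds since the epoch
        repository: str           # which storage resource currently holds the file
        file_aging_local_param: float = 1.0  # see the migration discussion below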

Figure 9. ACDC data grid Java tree view of files.

The hierarchical display does not list the file attribute data, so a list-based display has also been developed that can be used for sorting data grid files based on available metadata (e.g., filename, file size, modification time, and owner), as shown in Fig. 10.

Figure 10. ACDC data grid list-based view of sorted user files.

File metadata are currently stored in MySQL tables, and files are periodically migrated between machines for more optimal usage of resources. Basic file management functions are accessible via a platform-independent, user-friendly web interface that includes:

· user-friendly menus,

· file transfer capabilities (upload/download to and from the ACDC Data Grid Portal),

· a simple web-based file editor,

· an efficient search utility, and

· a logical display of files for a given user in three divisions (user/group/public), in both hierarchical and list-based views, with sorting based on file metadata (e.g., filename, size, and modification time).

The data grid supports multiple access to files and implements basic locking and synchronization primitives for version control. Security is integrated into the data grid through basic authentication and authorization of users, together with policies for data access and publishing. Gathering and displaying statistical information is particularly useful to administrators for optimizing usage of resources, and the ACDC data grid infrastructure periodically migrates files between data repositories for optimal usage of resources.

Migration Algorithm

File migration depends upon a number of factors, including:

• user access time,

• network capacity at the time of migration,

• the user profile, and

• user disk quotas on various resources.

Further, we have the ability to mine log files, which aids in determining:

• how much data to migrate in one migration cycle,

• an appropriate migration cycle length,

• a data grid user's file access pattern, and

• the overall access pattern for public or group files.

Global File Aging vs. Local File Aging

The user global file-aging attribute is indicative of a user's access across their own files and is an attribute of the user's profile. At migration time, this attribute helps determine which user's files should be migrated off of the grid portal onto a remote resource. The local file-aging attribute is indicative of the overall access to, and migration activity of, a particular file by users having group or public access; it is an attribute of the file and is stored in the file_management data grid table. At migration time, these attributes are used to determine which files should be migrated from the grid portal repository to a remote resource repository. Specifically, file migration is a function of global file aging, local file aging, and resource usage (e.g., the previous use of user files on individual compute platforms is also taken into consideration). By tracking the file access patterns of all user files and storing this information in the associated database tables, the ACDC data grid infrastructure can automatically determine an effective repository distribution of the data grid files. See Fig. 11 for a schematic of the physical ACDC data grid.

Figure 11. ACDC data grid repository location, network bandwidth, and size.

The file_aging_local_param attribute is a value on a scale of 0 to 1 representing the probability of whether or not to migrate a file, and it is initialized to 1. At migration time, after a user has been chosen, this attribute helps determine which of the user's files to migrate (e.g., migrate at most the top 5% of the user's files in any one cycle). For a given user, the average of the file_aging_local_param attributes over all files should be close to 1, with an operating tolerance of 0.9 to 1.1 before action is taken. In this way, the user file_aging_global_param can be a function of this average. If the average file_aging_local_param attribute is greater than 1.1, then the user's files are being held too long before being migrated, and the file_aging_global_param value should be decreased. If the average file_aging_local_param attribute is less than 0.9, then the user's files are being accessed at a higher frequency than the file_aging_global_param value reflects, and that value should be increased.
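The selection and feedback rules just described can be sketched as follows. The 0.9/1.1 tolerance and the 5% per-cycle cap come from the text above; the data structures and the adjustment step size are illustrative assumptions.

    # Sketch of the aging-based migration selection described above.
    def select_files_to_migrate(user_files, max_fraction=0.05):
        """user_files: list of dicts with 'filename' and 'file_aging_local_param'."""
        ranked = sorted(user_files,
                        key=lambda f: f["file_aging_local_param"], reverse=True)
        count = max(1, int(len(ranked) * max_fraction))  # top 5% per cycle
        return ranked[:count]

    def adjust_global_aging(user_files, global_param, step=0.05):
        if not user_files:
            return global_param
        avg = sum(f["file_aging_local_param"] for f in user_files) / len(user_files)
        if avg > 1.1:   # files held too long: decrease file_aging_global_param
            return global_param - step
        if avg < 0.9:   # files accessed frequently: increase file_aging_global_param
            return global_param + step
        return global_param   # within the 0.9-1.1 operating tolerance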

Support for multiple access to files in the data grid has been implemented with file locking and synchronization primitives. The integrated ACDC data grid also provides security for the authentication and authorization of users, as well as policies and facilities for data access and publication, thereby facilitating a secure collaboration environment.

Issues that remain to be considered include the following:

· What is the effect of publishing on file/user aging?

· What is the format for a user profile?

· When do we update the user file_aging_global_param attribute?

· What is the relationship between the aging attributes of two users in the same group? In different groups?

The ACDC Data Grid algorithms are continually evolving to minimize network traffic and maximize disk space utilization on a per-user basis. This is accomplished by mining user usage and disk space requirements in a ubiquitous and automated fashion.

The SnB grid-enabled data mining application utilizes most of the ACDC-Grid infrastructure presented thus far. In a typical execution scenario, the user defines a grid-enabled data mining SnB job using the Grid Portal web interface, supplying the molecular structure parameter sets to optimize, the data file metadata, the grid-enabled SnB mode of operation (dedicated or back fill), and the grid-enabled SnB stopping criteria. The Grid Portal then assembles the required SnB application data, supporting files, execution scripts, and database tables, and submits jobs for parameter optimization based on the current database statistics. ACDC-Grid job management automatically determines the appropriate execution times, number of trials, and number of processors for each available resource; logs the status of all concurrently executing resource jobs; automatically incorporates the SnB trial results into the molecular structure database; and initiates post-processing of the updated database for subsequent job submissions. Fig. 12 shows the logical relationship for the SnB grid-enabled data mining routine just described.
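The submission step of this scenario might look as follows; the resource attributes, the proportional trial-allocation rule, and the job description fields are illustrative assumptions rather than the portal's actual implementation.

    # Sketch of the grid-enabled SnB submission step described above.
    def plan_submissions(resources, total_trials, mode="backfill"):
        """resources: list of dicts with 'name', 'free_cpus', 'backfill_cpus',
        and 'avg_trial_minutes'. Returns one job description per usable resource."""
        def cpus_for(r):
            return r["backfill_cpus"] if mode == "backfill" else r["free_cpus"]

        usable = [r for r in resources if cpus_for(r) > 0]
        total_cpus = sum(cpus_for(r) for r in usable)
        jobs = []
        for r in usable:
            cpus = cpus_for(r)
            trials = max(1, total_trials * cpus // total_cpus)  # proportional share
            jobs.append({
                "resource": r["name"],
                "processors": cpus,
                "trials": trials,
                "walltime_minutes": trials * r["avg_trial_minutes"] // max(cpus, 1),
            })
        return jobs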

Figure 12. ACDC-Grid grid-enabled data mining diagram.

Access can be granted to a user for specific or all ACDC-Grid Portal resources, software, and web pages.

5. Shake-and-Bake Grid-Enabled Data Mining

Problem Statement

Use all available resources in the ACDC-Grid for executing a data mining genetic algorithm optimization of SnB parameters for molecular structures having the same space group.

Grid Enabling Criteria

All heterogeneous resources in the ACDC-Grid are capable of executing the SnB application.

All job results obtained from the ACDC-Grid resources are stored in a corresponding molecular structure database.

There are two modes of operation and two sets of stopping criteria. Data mining jobs can be submitted in:

a dedicated mode (time critical), where jobs are queued on ACDC-Grid resources, or

a back fill mode (non-time critical), where jobs are submitted to ACDC-Grid resources that have unused cycles available.

SnB data mining application jobs are submitted either until the grid-enabled SnB application determines that optimal parameters have been found, or indefinitely, in which case the grid job owner determines when optimal parameters have been found.

Execution Scenario

The user defines a grid-enabled data mining SnB job using the Grid Portal web interface, supplying:

the molecular structure parameter sets to optimize,

the data file metadata,

the grid-enabled SnB mode of operation (dedicated or back fill), and

the grid-enabled SnB stopping criteria.

The Grid Portal assembles the required SnB application data, supporting files, execution scripts, and database tables, and submits jobs for parameter optimization based on the current database statistics. ACDC-Grid job management includes:

automatic determination of appropriate execution times, number of trials, and number of processors for each available resource,

logging and status of all concurrently executing resource jobs,

automatic incorporation of SnB trial results into the molecular structure database, and

post-processing of the updated database for subsequent job submissions.

6. Grid2003 Participation Experience

The International Virtual Data Grid Laboratory (iVDGL) is a global Data Grid serving experiments at the forefront of physics and astronomy [12]. Its computing, storage, and networking resources in the U.S., Europe, Asia, and South America provide a unique computational laboratory that will test and validate Grid technologies at international and global scales. The Grid2003 project [13] was defined and planned by stakeholder representatives in an effort to align iVDGL project goals with the software and computing projects of the Large Hadron Collider (LHC) experiments. See Fig. 13.

Figure 13. Grid2003 project web page site catalog and status.

The Grid Laboratory Uniform Environment (GLUE) [14] collaboration was created in February 2002 to provide a focused effort to achieve interoperability between the U.S. physics Grid projects and the European projects. Participating U.S. projects include iVDGL, the Grid Physics Network (GriPhyN) [26], and the Particle Physics Data Grid (PPDG) [15]. Participating European projects include the European Data Grid (EDG) project [16], the TransAtlantic Grid project (DataTAG) [17], and CrossGrid [18]. Since the initial proposal for the GLUE project, the LHC Computing Grid (LCG) project was created at CERN [19] to coordinate the computing and Grid software requirements for the four LHC experiments, with a goal of developing common solutions. One of the main project goals is deploying and supporting global production Grids for the LHC experiments, which has resulted in the Grid2003 "production" grid.

6.1 Goals of the Grid2003 Project

The planning process converged at the iVDGL Steering Committee meeting at Argonne National Laboratory, June 8-9, 2003, with an agreed set of principles stating that the Grid2003 project must:

• provide the next phase of the iVDGL Laboratory;

• provide the infrastructure and services needed to demonstrate LHC production and analysis applications running at scale in a common grid environment;

• provide a platform for computer science technology demonstrators; and

• provide a common grid environment for LIGO and SDSS applications.

Planning details were iteratively defined and are available in the iVDGL document server (cf. Plan V21). The goals of this project included meeting a set of performance targets, using the metrics listed in the planning document. The central project milestone can be summarized as the delivery of a shared, multi-Virtual Organization (VO), multi-application grid laboratory in which performance targets were pursued through the deployment and execution of application demonstrations during the period before, during, and after the SC2003 conference in Phoenix (November 16-19). The project was organized as a broad, evolving team including the application groups, site administrators, middleware developers, core service providers, and operations. The active period of the project, in which the people involved were expected to contribute and be responsive to the needs of the project, was the five-month period from July through November 2003. Subsequent to this period, Grid3 remains largely intact, with many applications running, but with reduced expectations as to response time to problems and the attention of the members of the team. Grid2003 was coordinated by the iVDGL and PPDG project coordinators, and the project was able to call on additional effort through the stakeholder organizations.

The design and configuration of the Grid were driven by the requirements of the applications. The project included those responsible for installing and running the applications; the system managers responsible for each of the processing and storage sites (including the U.S. LHC Tier1 Centers, the iVDGL-funded prototype Tier2 Centers, resources from physics departments, and leveraged facilities from large-scale computing centers); and the groups responsible for delivery and support of the grid system services and operations. The overall approach of the project was "end-to-end" in terms of giving equal attention to the application, organization, site infrastructure, and system services needed to achieve science applications running on a shared grid.

The applications running on Grid3 included official releases corresponding to the production environments that the experiments will use for production and analysis over the next year. Applications from the computer science research groups (GridFTP, Exerciser, Netlogger) were used to explore the performance of different aspects of the Grid.

The project plan included basic principles that contributed to making life simpler and more flexible. In particular, the decisions to dynamically install applications, to not presume the installation and configuration of the "worker" processing nodes, and to use existing facilities and batch systems without reinstallation of software all contributed to the success of the project.

The collaborative organization of the project allowed the team to address problems as they arose and to focus efforts in response to unanticipated issues. The team made decisions and was flexible enough to accept additional sites and applications into the project as it evolved. Identifying people with coordination roles helped the project to scale in size and complexity. These roles were filled by representatives of their respective projects, including: Sites (iVDGL operations team), Applications (iVDGL applications coordinator, with liaisons from each VO's Software and Computing project), Monitoring (Grid Telemetry), Operations (iVDGL operations), and Troubleshooting (VDT).

As stated, the Grid2003 project was organized to meet several strategic project goals, including building the "next phase" of the iVDGL Laboratory, according to the stated goals and mandate of the NSF-funded iVDGL ITR project. The iVDGL project previously had two deployments (both in 2002): a small testbed consisting of the VDT deployed on a small number of U.S. sites (iVDGL-1), followed by a second, joint deployment with the EU DataTAG project (iVDGL-2), which coincided with the IST 2002 and SC2002 conferences (the "WorldGrid" project). Grid3 (originally proposed as iVDGL-3) is the third phase of the iVDGL Laboratory.

The Grid2003 project deployed, integrated, and operated Grid3 with 27 operational processing sites comprising, at peak, approximately 2800 CPUs for more than three weeks leading up to, during, and after the SC2003 conference on November 16, 2003. Progress was also made in other areas that are important to the iVDGL mission, as follows:

• Multiple VO grid: six different virtual organizations participated, with 10 applications deployed and successfully run. All applications were able to run on sites that were not owned by the organization hosting the application. Further, the applications were all able to run on non-dedicated resources.



• Multi-disciplinary grid: during the project, two new applications, one from structural biology (SnB) and the other from chemical informatics, were run across Grid3. The fact that these could be installed and run on a grid infrastructure designed and installed for particle physics and astrophysics experiments gives the members of iVDGL added confidence that this infrastructure is general and can be adapted to other applications as needed.



• Use of shared resources: many of the resources brought into the Grid3 environment were leveraged facilities in use by other VOs.

• Grid operations and establishment of the iGOC: resources from the Indiana University-based Abilene NOC were leveraged to provide a number of operations services, including VOMS services for iVDGL VO participants (CS, Biology, Chemistry); the MonALISA Grid3 database, which served double duty as the online resource display and as archival storage for the Metrics Data Viewer (MDViewer) used for analysis of Grid3 metrics; the top-level GIIS information service; development and support of the iVDGL:Grid3 Pacman cache; coordination, development, and hosting of site status scripts and displays; and creation/support of Ganglia Pacman caches and hosting of the top-level Ganglia collector and web server.

• Dynamic resource allocation: in addition to resources that were committed 24/7, the University at Buffalo's Center for Computational Research (CCR) configured its local schedulers to bring additional resources into and out of Grid3 on a daily basis according to local policies, satisfying both local requirements and (external) Grid3 users.

• International connectivity: one site was located abroad (Kyungpook National University, Korea).

Over the course of several weeks surrounding SC2003, the Grid2003 project met its target goals, as follows:


· International connectivity: though one site was located abroad (Kyungpook National University, Korea), international operations were not a primary focus, in contrast to last year's WorldGrid VDT-EDG interoperability demonstration project, which focused on transatlantic grids.

· VDT installation and configuration: improvements included enhanced support for post-install configurations at sites with native Condor installations. Pacman3 development was concurrent with most of the project and was not used for deployment; however, initial tests of Pacman3 with iVDGL:Grid3 have demonstrated backwards compatibility of the new tool.

· VDT testing and robustification: the Troubleshooting team, led by the VDT group, oversaw a number of VDT improvements and patches in response to bugs uncovered by site administrators, application users, and developers. These included, most importantly, patches required for job managers and provisions for the MDS 2.4 upgrade.

The Grid2003 project met the metrics from the planning document, as listed below. The "status" numbers fluctuated over the course of several weeks around Supercomputing 2003, when the grid was in full production.

1. Number of CPUs. Target: 400; status: 2163. More than 60% of the available CPU resources are non-dedicated facilities; the Grid3 environment effectively shares resources not directly owned by the participating experiments.

2. Number of Users. Target: 10; status: 102. About 10% of the users are application administrators who perform the majority of the job submissions; however, more than 102 users are authorized to use the resources through their respective VOMS services.

3. Number of Applications. Target: >4; status: 10. Seven scientific applications, including at least one from each of the five GriPhyN-iVDGL-PPDG participating experiments, SnB structural biology, and GADU/Gnare genome analysis, have run and continue to run on Grid3. In addition, three computer science demonstrators (an instrumented GridFTP, a multi-site I/O generator, and grid health monitors) are run periodically.

4. Number of Sites Running Concurrent Applications. Target: >10; status: 17. This number is related to the number of Computational Service (CS, CSE) sites defined on the catalog page and varies with the application.

5. Data Transfers Per Day. Target: 2-3 TB; status: 4 TB. This metric was met with the aid of the GridFTP demonstrator, which ran concurrently with the scientific applications.

6. Percentage of Resources Used. Target: 90%; status: 40-70%. The maximum number of CPUs on Grid3 exceeded 2500. On November 20, 2003, there were sustained periods when over 1100 jobs ran simultaneously (the metrics plots are averages over specific time bins, which can report less than the peak depending on the chosen bin size). Each time a component of the grid was upgraded, there was a significant length of time before stable operation was regained; in the latter part of the project, most of the upgrades were for the monitoring systems, which did not prevent applications from running.

7. Efficiency of Job Completion. Target: up to 75%; status: varies. This figure varies depending on the application and the definition of failure. Generally speaking, for well-run Grid3 sites and stable applications it exceeds 90%. We have not had the time to explore why individual jobs fail.

8. Peak Number of Concurrent Jobs. Target: up to 1000; status: 1100, achieved on November 20, 2003, when there were sustained periods with over 1100 jobs running simultaneously.

9. Rate of Faults/Crashes: Target: ................