WMS SW Admin and User Guide



DataGrid

WP1 - WMS Software Administrator and User Guide

(PM9 RELEASE)


|Document identifier: |DataGrid-01-TEN-0118-0_1 |

|Date: |14/01/2002 |

|Work package: |WP1 |

|Partner: |Datamat SpA |

|Document status: |DRAFT |

|Deliverable identifier: | |

|Abstract: This note provides the administrator and user guide for the WP1 WMS software delivered for PM9 release. |

|Delivery Slip |

| |Name |Partner |Date |Signature |

|From |Fabrizio Pacini |Datamat SpA |14/01/2002 | |

|Verified by |Stefano Beco |Datamat SpA |14/01/2002 | |

|Approved by | | | | |

|Document Log |

|Issue |Date |Comment |Author |

|0_0 |21/12/2001 |First draft |Fabrizio Pacini |

|0_1 |14/01/2002 |Draft |Fabrizio Pacini |

|Document Change Record |

|Issue |Item |Reason for Change |

|0_1 |General update |Take into account changes in the rpm generation procedure. |

| | |Add missing info about daemons (RB/JSS/CondorG) starting accounts |

| | |Some general corrections |

| | | |

|Files |

|Software Products |User files |

|Word 97 |DataGrid-01-TEN-0118-0_1-Document |

|Acrobat Exchange 4.0 |DataGrid-01-TEN-0118-0_1-Document.pdf |

Contents

1. Introduction

1.1. Objectives of this document

1.2. Application area

1.3. Applicable documents and reference documents

1.4. Document evolution procedure

1.5. Terminology

2. Executive summary

3. Build Procedure

3.1. Required Software

3.2. Build Instructions

3.2.1. Environment Variables

3.2.2. Compiling the code

3.3. RPM Installation

4. Installation and Configuration

4.1. Logging and Bookkeeping services

4.1.1. Required software

4.1.2. RPM installation

4.1.3. The installation tree structure

4.1.4. Configuration

4.1.5. Environment Variables

4.2. RB and JSS

4.2.1. Required software

4.2.2. RPM installation

4.2.3. The Installation Tree structure

4.2.4. Configuration

4.2.5. Environment variables

4.3. Information Index

4.3.1. Required software

4.3.2. RPM installation

4.3.3. The Installation tree structure

4.3.4. Configuration

4.3.5. Environment Variables

4.4. User Interface

4.4.1. Required software

4.4.2. RPM installation

4.4.3. The tree structure

4.4.4. Configuration

4.4.5. Environment variables

5. Operating the System

5.1. LB local-logger

5.1.1. Starting and stopping daemons

5.1.2. Troubleshooting

5.2. LB Server

5.2.1. Starting and stopping daemons

5.2.2. Purging the LB database

5.2.3. Troubleshooting

5.3. RB and JSS

5.3.1. Starting PostgreSQL

5.3.2. Starting Condor-G

5.3.3. Starting and stopping RB daemons

5.3.4. Starting and stopping JSS daemons

5.3.5. RB troubleshooting

5.3.6. JSS troubleshooting

5.4. Information Index

5.4.1. Starting and stopping daemons

6. User Guide

6.1. User interface

6.1.1. Security

6.1.2. Common behaviours

6.1.3. Commands description

7. Annexes

7.1. JDL Attributes

7.2. Job Status Diagram

7.3. Job Event Types

7.4. Wildcard patterns

7.5. The Match Making Algorithm

7.5.1. Direct Job Submission

7.5.2. Job submission without data-access requirements

7.5.3. Job submission with data-access requirements

Introduction

This document provides a guide to the building, installation and usage of the WP1 WMS software released for PM9.

1 Objectives of this document

The goal of this document is to describe the complete process by which the WP1 WMS software can be installed and configured on the DataGrid test-bed platforms.

Guidelines for operating the whole system and accessing provided functionalities are also provided.

2 Application area

Administrators can use this document as a basis for installing, configuring and operating WP1 WMS software released for PM9. Users can refer to the User Guide chapter for accessing provided services through the User Interface.

3 Applicable documents and reference documents

Applicable documents

|[A1] |Job Description Language HowTo – DataGrid-01-TEN-0102-02-Document.pdf – 17/12/2001 |

|[A2] |DATAGRID WP1 Job Submission User Interface for PM9 (revised presentation) – 23/03/2001 |

|[A3] |WP1 meeting - CESNET presentation in Milan – 20-21/03/2001 |

|[A4] |Logging and Bookkeeping Service – 07/05/2001 |

|[A5] |Results of Meeting on Workload Manager Components Interaction – 09/05/2001 |

|[A6] |Resource Broker Architecture and APIs – 13/06/2001 |

|[A7] |JDL Attributes - DataGrid-01-NOT-0101-0_4 – 17/12/2001 |

Reference documents

|[R1] | |

4 Document evolution procedure

The content of this document will be subject to modification according to the following events:

• Comments received from DataGrid project members,

• Changes/evolutions/additions to the WMS components.

5 Terminology

Definitions

|Condor |Condor is a High Throughput Computing (HTC) environment that can manage very large collections of |

| |distributively owned workstations |

|Globus |The Globus Toolkit is a set of software tools and libraries aimed at the building of computational grids and |

| |grid-based applications. |

Glossary

|class-ad |Classified advertisement |

|CE |Computing Element |

|DB |Data Base |

|FQDN |Fully Qualified Domain Name |

|GDMP |Grid Data Management Pilot Project |

|GIS |Grid Information Service, aka MDS |

|GSI |Grid Security Infrastructure |

|job-ad |Class-ad describing a job |

|JDL |Job Description Language |

|JSS |Job Submission Service |

|LB |Logging and Bookkeeping Service |

|LRMS |Local Resource Management System |

|MDS |Metacomputing Directory Service, aka GIS |

|MPI |Message Passing Interface |

|PID |Process Identifier |

|PM |Project Month |

|RB |Resource Broker |

|RC |Replica Catalogue |

|SE |Storage Element |

|SI00 |Spec Int 2000 |

|SMP |Symmetric Multi Processor |

|TBC |To Be Confirmed |

|TBD |To Be Defined |

|UI |User Interface |

|UID |User Identifier |

|WMS |Workload Management System |

|WP |Work Package |

Executive summary

This document comprises the following main sections:

Section 3: Build Procedure

Outlines the software required to build the system and the actual process for building it and generating rpms for the WMS components; a step-by-step guide is included.

Section 4: Installation and Configuration

Describes changes that need to be made to the environment and the steps to be performed for installing the WMS software on the test-bed target platforms. The resulting installation tree structure is detailed for each system component.

Section 5: Operating the System

Provides actual procedures for starting/stopping WMS components processes and utilities.

Section 6: User Guide

Describes, in Unix man-page style, all the User Interface commands that allow the user to access the services provided by the WMS.

Section 7: Annexes

Expands on topics introduced in the User Guide section, helping the user to better understand system behaviour.

Build Procedure

In the following section we give detailed instructions for the installation of the WP1 WMS software package. We provide a source code distribution as well as a binary distribution and explain installation procedures for both cases.

1 Required Software

The WP1 software runs and has been tested on platforms running Globus Toolkit 2.0 Beta Release 21 on top of Linux RedHat 6.2.

The software packages, apart from WP1 software version 1.0, that must be installed locally on a given site in order to build the WP1 WMS are listed hereafter:

– Globus Toolkit 2.0 Beta 21 or higher (download at )

– Python 2.1.1 (download at )

– Swig 1.3.7 (download at )

– Expat 1.95.1 (download at )

– MySQL Version 9.38 Distribution 3.22.32, for pc-linux-gnu (i686) (download at )

– Postgresql 7.1.3 ()

– Classads library

– CondorG 6.3.1 for INTEL-LINUX-GLIBC21

– Perl IO Stty 0.02, Perl IO Tty 0.04 (download at )

– Perl 5 (download at )

– gcc and c++ compilers egcs-2.91.66 or egcs-2.95.2 (mandatory for CondorG)

– GNU make version 3.78.1 or higher

– GNU autoconf version 2.13

– GNU libtool 1.3.5

– GNU automake 1.4

– GNU m4 1.4 or higher

– RPM 3.0.5

– sendmail 8.11.6

2 Build Instructions

The following instructions deal with the building of the WMS software and hence apply to the source code distribution.

1 Environment Variables

Before starting the compilation, some environment variables related to the WMS components can be set or configured by means of the configure script. This is needed only if package defaults are not suitable. Involved variables are listed below:

- GLOBUS_LOCATION base directory of the Globus installation

The default path is /opt/globus.

- MYSQL_INSTALL_PATH base directory of the MySQL installation

The default path is /usr.

- EXPAT_INSTALL_PATH base directory of the Expat installation.

The default path is /usr.

- GDMP_INSTALL_PATH base directory of the Gdmp installation

The default path is /opt/edg.

- PGSQL_INSTALL_PATH base directory of the Pgsql installation.

The default path is /usr.

- CLASSAD_INSTALL_PATH base directory of the Classad library installation.

The default path is /opt/classads.

- CONDORG_INSTALL_PATH base directory of the Condor installation.

The default path is /opt/CondorG.

- PYTHON_INSTALL_PATH base directory of the Python installation.

The default path is /usr.

- SWIG_INSTALL_PATH base directory of the Swig installation.

The default path is /usr/local.

In order to build the whole WP1 package, all the environment variables in the previous list must be set. For building only the User Interface module, instead, the environment variables that need to be set are the following (a setting example is given at the end of this section):

- GLOBUS_LOCATION

- CLASSAD_INSTALL_PATH

- PYTHON_INSTALL_PATH

- SWIG_INSTALL_PATH

- EXPAT_INSTALL_PATH

If you plan to build the Job Submission and Resource Broker module, the variables to set are:

- GLOBUS_LOCATION

- MYSQL_INSTALL_PATH

- EXPAT_INSTALL_PATH

- GDMP_INSTALL_PATH

- PGSQL_INSTALL_PATH

- CLASSAD_INSTALL_PATH

- CONDORG_INSTALL_PATH

The LB server and Local Logger modules instead need the following environment variables in order to be built:

- GLOBUS_LOCATION

- MYSQL_INSTALL_PATH

- EXPAT_INSTALL_PATH

Finally, the LB library module needs:

- GLOBUS_LOCATION

- EXPAT_INSTALL_PATH

and the Information Index module only:

- GLOBUS_LOCATION
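
As an example of the settings above, the following bash snippet (a sketch; the values shown are simply the package defaults quoted earlier and must be adapted to the local installation) prepares the environment for a User Interface-only build:

export GLOBUS_LOCATION=/opt/globus          # Globus installation base
export CLASSAD_INSTALL_PATH=/opt/classads   # Classad library base
export PYTHON_INSTALL_PATH=/usr             # Python installation base
export SWIG_INSTALL_PATH=/usr/local         # Swig installation base
export EXPAT_INSTALL_PATH=/usr              # Expat installation base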

2 Compiling the code

After having unpacked the WP1 source distribution tar file, or having downloaded the code directly from the CVS repository, change your working directory to be the WP1 base directory, i.e. the Workload directory, and run the following command:

./recoursive-bootstrap

At this point the configure command can be run. The configure script has to be invoked as follows:

./configure

The list of options that are recognized by configure is reported hereafter:

--help

--prefix=<install-path>

It is used to specify the Workload installation dir. The default installation dir is /opt/edg.

--enable-all

It is used to enable the build of the whole WP1 package. By default this option is turned on.

--enable-userinterface

It is used to enable the build of the User Interface module with Logging/Client, Broker/Client, Broker/Socket++ and ThirdParty/trio/src submodules. By default this option is turned off.

--enable-jss_rb

It is used to enable the build of the Job Submission and Resource Broker modules with Logging/Client, Common, test, and ThirdParty/trio/src submodules. By default this option is turned off.

--enable-lbserver

It is used to enable the build of the LB Server service with Logging/Client, Logging/etc, Logging/Server, Logging/InterLogger/Net, Logging/InterLogger/SSL, Logging/InterLogger/Error, Logging/InterLogger/Lbserver and ThirdParty/trio/src submodules. By default this option is turned off.

--enable-locallogger

It is used to enable the build of the LB Local Logger service with Logging/Client, Logging/InterLogger/Net, Logging/InterLogger/SSL, Logging/InterLogger/Error, Logging/InterLogger/InterLogger, Logging/LocalLogger, man and ThirdParty/trio/src submodules. By default this option is turned off.

--enable-logging_dev

It is used to enable the build of the LB Client Library with Logging/Client and ThirdParty/trio/src submodules. By default this option is turned off.

--enable-information

It is used to enable the build of the Information Index module. By default this option is turned off.

--with-globus-install=<path>

It allows specifying the Globus installation directory without setting the environment variable GLOBUS_LOCATION.

--with-pgsql-install=<path>

It allows specifying the Pgsql installation directory without setting the environment variable PGSQL_INSTALL_PATH.

--with-gdmp-install=<path>

It allows specifying the GDMP installation directory without setting the environment variable GDMP_INSTALL_PATH.

--with-expat-install=<path>

It allows specifying the Expat installation directory without setting the environment variable EXPAT_INSTALL_PATH.

--with-mysql-install=<path>

It allows specifying the MySQL installation directory without setting the environment variable MYSQL_INSTALL_PATH.

--with-expat=<yes|no>

It allows enabling or disabling the Expat installation checking. The default value is 'yes'.

--with-pgsql=<yes|no>

It allows enabling or disabling the Pgsql installation checking. The default value is 'yes'.

--with-mysql=<yes|no>

It allows enabling or disabling the MySQL installation checking. The default value is 'yes'.

--with-gdmp=<yes|no>

It allows enabling or disabling the Gdmp installation checking. The default value is 'yes'.

During the configure step, six spec files (i.e. wl-userinterface.spec, wl-locallogger.spec, wl-lbserver.spec, wl-logging_dev.spec, wl-jss_rb.spec and wl-information.spec) are created in the following source sub-directories to produce a flavour-specific version:

- Workload/UserInterface

- Workload/Logging

- Workload/JobSubmission

- Workload/InformIndex

Once the configure script has terminated its execution, check that the make from the GNU distribution is in your path and then, still in the Workload source code directory, run:

make

then:

make check

to build the test code. If the two previous steps complete successfully, the installation of the software can be performed. In order to install the package in the installation directory specified either by the --prefix option of the configure script or by the default value (i.e. /opt/edg), you can now issue the command:

make install

It is possible to run "make clean" to remove object files, executable files, library files and all the other files that are created during "make" and "make check". The command:

make -i dist

can be used to produce, in the workload-1.0.0 directory located in the Workload base directory, a binary gzipped tar ball of the Workload distribution. This tar ball can be both transferred to other platforms and used as source for the RPM creation.
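
Summarising the steps described so far, a complete build and installation from the source distribution could look as follows (a sketch assuming the default /opt/edg prefix and that the environment variables of section 3.2.1 are already set):

cd Workload              # the WP1 base directory
./recoursive-bootstrap   # generate the configure scripts
./configure --prefix=/opt/edg
make                     # build the software
make check               # build the test code
make install             # install under /opt/edg
make -i dist             # optionally create the binary tar ball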

For creating the RPMs for Workload 1.0 (according to the configure options you have used), make sure that your PATH is set in such a way that the GNU autotools, make and the gcc compiler can be used, and edit the file $HOME/.rpmmacros (if this file does not exist in your home directory, you have to create it) to set the following entry:

%_topdir <home-directory>/rpm/redhat

Then you can issue the command:

make rpm

that generates the RPMs in $(HOME)/rpm/redhat/RPMS.
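
For example, the .rpmmacros entry can be created and the RPM generation launched as follows (a sketch; the unquoted heredoc expands $HOME to the actual home directory at creation time):

cat > $HOME/.rpmmacros <<EOF
%_topdir $HOME/rpm/redhat
EOF
make rpm
ls $HOME/rpm/redhat/RPMS    # the generated rpms end up here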

For example if before building the package you have used the configure as follows:

./configure --enable-all

then the make rpm command creates the directories:

$(HOME)/rpm/redhat/SOURCES

$(HOME)/rpm/redhat/SPECS

$(HOME)/rpm/redhat/BUILD

$(HOME)/rpm/redhat/RPMS

$(HOME)/rpm/redhat/SRPMS

and copies the previously created tar ball workload-1.0.0/Workload.tar.gz in $(HOME)/rpm/redhat/SOURCES. Moreover it copies the generated spec files:

JobSubmission/wl-jss_rb.spec

UserInterface/wl-userinterface.spec

InformIndex/wl-information.spec

Logging/wl-lbserver.spec

Logging/wl-locallogger.spec

Logging/wl-logging_dev.spec

in $(HOME)/rpm/redhat/SPECS and finally executes the following commands:

rpm -ba wl-userinterface.spec

rpm -ba wl-locallogger.spec

rpm -ba wl-lbserver.spec

rpm -ba wl-logging_dev.spec

rpm -ba wl-jss_rb.spec

rpm -ba wl-information.spec

generating respectively the following rpms in the $(HOME)/rpm/redhat/RPMS directory:

- userinterface-1.0.0-6.i386.rpm

- locallogger-1.0.0-5.i386.rpm

- lbserver-1.0.0-6.i386.rpm

- logging_dev-1.0.0-4.i386.rpm

- jobsubmission-1.0.0-6.i386.rpm

- informationindex-1.0.0-5.i386.rpm

If you have instead built only the User Interface, i.e. used:

./configure --disable-all --enable-userinterface --with-mysql='no'

--with-pgsql='no' --with-gdmp='no'

the make rpm command will copy only the file UserInterface/wl-userinterface.spec in $(HOME)/rpm/redhat/SPECS and will create only the User Interface rpm (userinterface-1.0.0-6.i386.rpm).

An alternative procedure can be followed to build the II and Logging packages. To do this, move into the Workload/InformIndex dir and run the following commands:

./bootstrap

./configure [option]

where the recognized options are:

--prefix=<install-path>

It is used to specify the Information Index installation dir. The default installation dir is /opt/edg.

--with-globus-install=

It allows specifying the Globus install directory without setting the environment variable GLOBUS_LOCATION.

Then issue:

make

make install

Afterwards move into the Workload/Logging directory and run the following commands:

./bootstrap

./configure [option]

where the recognized options are:

--prefix=<install-path>

It is used to specify the Logging installation dir. The default installation dir is /opt/edg.

--with-globus-install=<path>

It allows specifying the Globus install directory without setting the environment variable GLOBUS_LOCATION.

--with-expat-install=<path>

It allows specifying the Expat install directory without setting the environment variable EXPAT_INSTALL_PATH.

--with-mysql-install=<path>

It allows specifying the MySQL install directory without setting the environment variable MYSQL_INSTALL_PATH.

--with-expat=<yes|no>

It allows enabling or disabling the Expat install checking. The default value is 'yes'.

--with-mysql=<yes|no>

It allows enabling or disabling the MySQL install checking. The default value is 'yes'.

Then issue:

make

make check

make install

Summarising, in relation to the WMS module you want to build, the configure script has to be run with the following options:

– all

./configure

– userinterface

./configure --disable-all --enable-userinterface \

--with-mysql='no' --with-pgsql='no' --with-gdmp='no'

– information

./configure --disable-all --enable-information

– lbserver

./configure --disable-all --enable-lbserver

– locallogger

./configure --disable-all --enable-locallogger

– logging for developers

./configure --disable-all --enable-logging_dev \

--with-mysql='no'

– jobsubmission and broker

./configure --disable-all --enable-jss_rb

3 RPM Installation

In order to install the WP1 RPMs on the target platforms, the following commands have to be executed as root:

rpm -ivh userinterface-1.0.0-6.i386.rpm

rpm -ivh informationindex-1.0.0-5.i386.rpm

rpm -ivh jobsubmission-1.0.0-6.i386.rpm

rpm -ivh locallogger-1.0.0-5.i386.rpm

rpm -ivh lbserver-1.0.0-6.i386.rpm

rpm -ivh logging_dev-1.0.0-4.i386.rpm

By default the rpm installs the software in the /opt/edg directory. If you have installed one of the following rpms:

- userinterface-1.0.0-6.i386.rpm,

- informationindex-1.0.0-5.i386.rpm

- jobsubmission-1.0.0-6.i386.rpm

you have to run the /opt/edg/etc/configure_workload script as root, which installs the /opt/edg/etc/workload.sh and /opt/edg/etc/workload.csh scripts under /etc/profile.d. These latter scripts set the EDG_LOCATION environment variable to /opt/edg and run the $EDG_LOCATION/etc/workload_{jss, ui}_env.sh scripts. The script workload_ui_env.{sh, csh} sets and updates the following environment variables:

PATH="${EDG_LOCATION}/bin:${PATH}"

LD_LIBRARY_PATH="${EDG_LOCATION}/lib:${LD_LIBRARY_PATH}"

PYTHONPATH="${EDG_LOCATION}/lib:${PYTHONPATH}"

The script workload_jss_env.{sh, csh} checks instead that the condor_master and condor_schedd executables are present on the machine.

Furthermore, the start_JobSubmission and start_Broker script files in /opt/edg/utils can be run as root to start the Job Submission and Broker services. The SXXII script file in /opt/edg/utils can be run as root to start the Information Index service, and finally the kill_JobSubmission and kill_Broker script files can be run as root to stop the RBserver, jssparser and jssserver processes.
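
By way of illustration, on a machine hosting RB, JSS and II the services could then be handled as follows (a sketch assuming the default /opt/edg prefix):

# as root, after having run /opt/edg/etc/configure_workload
/opt/edg/utils/start_JobSubmission   # start the Job Submission service
/opt/edg/utils/start_Broker          # start the RB server
/opt/edg/utils/SXXII                 # start the Information Index
/opt/edg/utils/kill_JobSubmission    # stop jssparser and jssserver
/opt/edg/utils/kill_Broker           # stop the RBserver process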

Details on the installation and configuration of each of the listed rpms are provided in section 4 of this document. For further information about RPM please consult the man pages.

Installation and Configuration

This section deals with the procedures for installing and configuring the WP1 WMS components on the target platforms. For each of them, the list of dependencies, i.e. the software required on the same machine for the component to run, is reported before the step-by-step installation procedure. A description of the needed configuration items and environment variable settings is also provided.

1 Logging and Bookkeeping services

From the installation point of view LB services can be split in two main components:

• The LB services responsible for accepting messages from their sources and forwarding them to the logging and/or bookkeeping servers, to which we will refer as the LB local-logger services.

• The LB services responsible for accepting messages from the LB local-logger services, saving them on their permanent storage and supporting queries generated by the consumer API, to which we will refer as the LB server services.

The LB local-logger services must be installed on all the machines hosting processes pushing information into the LB system, i.e. the machines running RB and JSS, and the gatekeeper machine of the CE. An exception is the submitting machine (i.e. the machine running the User Interface), on which this component can be installed but is not mandatory.

The LB server services need instead to be installed only on a server machine that usually coincides with the RB server one.

1 Required software

1 LB local-logger

For the installation of the LB local-logger the only software required is the Globus Toolkit 2.0 (actually only GSI rpms are needed). Globus 2 rpms are available at under the directory beta-xx/RPMS (recommended beta is 21 or higher). All rpms can be downloaded with the command

wget -nd -r /

and installed with

rpm -ivh

2 LB Server

For the installation of the LB server the Globus Toolkit 2.0 is required (actually only GSI rpms are needed). Globus 2 rpms are available at under the directory beta-xx/RPMS (recommended beta is 21 or higher). All rpms can be downloaded with the command

wget -nd -r /

and installed with

rpm -ivh

Besides the Globus Toolkit 2.0, for the LB server to work properly it is also necessary to install MySQL Distribution 3.22.31 or higher.

Instructions about MySQL installation can be found at the following URLs:



Packages and more general documentation can be found at:

.

Anyway the rpm of MySQL Ver 9.38 Distribution 3.22.32, for pc-linux-gnu (i686) is available at .

At least the packages MySQL-3.22.32 and MySQL-client-3.22.32 have to be installed for creating and configuring the LB database.

LB server stores the logging data in a MySQL database that must hence be created. The following assumes the database and the server daemons (bkserver and ileventd) run on the same machine, which is considered to be secure, i.e. no database authentication is used. In a different set-up the procedure has to be adjusted accordingly as well as a secure database connection (via ssh tunnel etc.) established.

The action list below contains the placeholders DB_NAME and USER_NAME, for which real values have to be substituted. They form the database connection string required on some LB daemon invocations. The suggested value for both DB_NAME and USER_NAME is 'lbserver'; this value is also the compiled-in default (i.e. when it is used, the database connection string need not be specified at all).

The following steps require MySQL root privileges:

1) Create the database:

mysqladmin -u root -p create DB_NAME

where DB_NAME is the name of the database.

2) Create a dedicated LB database user:

mysql -u root -p -e 'grant create,drop,select,insert, \
update,delete on DB_NAME.* to USER_NAME@localhost'

where USER_NAME is the name of the user running the LB server daemons.

3) Create the database tables:

mysql -u USER_NAME DB_NAME < server.sql

where server.sql is a file containing the sql commands for creating the needed tables. server.sql can be found in the "etc" directory under the installation path (by default /opt/edg/etc) created by the LB server rpm installation.
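
Putting the three steps together with the suggested default value lbserver for both DB_NAME and USER_NAME, the whole set-up reduces to the following sketch (the /opt/edg/etc path assumes the default rpm prefix):

mysqladmin -u root -p create lbserver
mysql -u root -p -e 'grant create,drop,select,insert,update,delete on lbserver.* to lbserver@localhost'
mysql -u lbserver lbserver < /opt/edg/etc/server.sql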

2 RPM installation

In order to install the LB local-logger and the LB server services, the following commands have to be issued with root privileges:

rpm -ivh [--prefix <install-path>] locallogger-1.0.0-5.i386.rpm

rpm -ivh [--prefix <install-path>] lbserver-1.0.0-6.i386.rpm

By default the rpm installs the software in the “/opt/edg” directory. Using the --prefix directive, it is possible to install the software in a different location (i.e. in the <install-path> directory). The --relocate <old-path>=<new-path> directive can instead be used to relocate an installation from <old-path> to <new-path>.

3 The installation tree structure

1 LB local-logger

When the LB local-logger RPM is installed, the following directory tree is created:

/info


/lib (empty dir)

/man

/man1/interlogger.1

/man3 (empty dir)

/sbin

/sbin/dglogd

/sbin/interlogger

/sbin/locallogger

The sbin directory contains all the LB local-logger daemon executables and the locallogger script to be used for starting the daemons. In the man directory can be found the man page for the inter-logger daemon.

After having installed the locallogger package the administrator shall create in the directory “/etc/rc.d/init.d” a symbolic link to <install-path>/sbin/locallogger, using as root the following commands:

cd /etc/rc.d/init.d

ln -s <install-path>/sbin/locallogger locallogger

2 LB Server

When the LB server RPM package is installed, the following directory tree is created:

/sbin

/lib (empty dir)

/sbin/bkpurge

/sbin/bkserver

/sbin/ileventd

/sbin/lbserver

/etc/server.sql

/share/doc

/share/doc/DataGrid-01-TEN-0118-0_0.pdf

where the sbin directory contains all the LB server daemon executables and the lbserver script to be used for starting the daemons.

After having installed the lbserver package the administrator shall create in the directory “/etc/rc.d/init.d” a symbolic link to <install-path>/sbin/lbserver, using as root the following commands:

cd /etc/rc.d/init.d

ln -s <install-path>/sbin/lbserver lbserver

4 Configuration

Neither the LB local-logger nor the LB server has configuration files, so no action is needed for this task.

5 Environment Variables

All LB components need the following environment variables to be set:

– X509_USER_KEY the user private key file path

– X509_USER_CERT the user certificate file path

– X509_CERT_DIR the trusted certificate directory and ca-signing-policy directory

– X509_USER_PROXY the user proxy certificate file path

as required by GSI.

However, in the case of the LB daemons, the recommended way of specifying the locations of the security files is to use the --cert, --key and --CAdir options explicitly.

The Logging library, i.e. the library that is linked into UI, RB, JSS and Jobmanager, reads its immediate logging destination from the variable DGLOG_DEST.

It defaults to “x-dglog://localhost:15830”, which is the correct value, hence it normally does not need to be set except on the submitting machine. The correct format for this variable is:

DGLOG_DEST=x-dglog://HOST:PORT

where as already mentioned HOST defaults to localhost and PORT defaults to 15830.

On the submitting machine, if the variable is not set, it is dynamically assigned by the UI with the value:

DGLOG_DEST=x-dglog://<LB_CONTACT>:15830

where LB_CONTACT is the hostname of the machine running the LB server currently associated with the RB used for submitting jobs.

Finally there is LBDB, the environment variable needed by the LB Server daemons (ileventd, bkserver and bkpurge). LBDB represents the MySQL database connect-string; it defaults to “lbserver/@localhost:lbserver” and in the recommended set-up (see section 4.1.1.2) does not need to be set. Otherwise it should be set as follows:

LBDB=USER_NAME/PASSWORD@DB_HOSTNAME:DB_NAME

where

- USER_NAME is the name of the database user,

- PASSWORD is the user's password for the database,

- DB_HOSTNAME is the hostname of the host where the database is located,

- DB_NAME is the name of the database.
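
For instance, in a non-default set-up where the LB server and its database run on a dedicated host, the two variables described in this section could be set as follows (hostname, user and password are purely illustrative placeholders):

# on the submitting machine (UI):
export DGLOG_DEST=x-dglog://lbhost.example.org:15830
# on the LB server machine, for the LB server daemons:
export LBDB=lbserver/secret@lbhost.example.org:lbserver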

2 RB and JSS

The Resource Broker and the Job Submission Services are the WMS components allowing the submission of jobs to the CEs. They are dealt with together since they always reside on the same host and consequently are distributed by means of a single rpm.

1 Required software

For the installation of RB and JSS the Globus Toolkit 2.0 rpms available at under the directory beta-xx/RPMS (recommended beta is 21 or higher) are required to be installed on the target platform. All needed rpms can be downloaded with the command

wget -nd -r /

and installed with

rpm -ivh

The Globus gridftp server package must also be installed and configured on the same host (see for details).

It is important to recall that the Globus grid-mapfile located in /etc/grid-security on the RB server machine must be filled with the certificate subjects of all the users allowed to use the Resource Broker functionalities. Moreover on the same platform the following products are expected to be installed:

– LB local-logger services (see section 4.1.1.1)

– PostgreSQL (RB and JSS)

– Condor-G (JSS)

– ClassAd library (RB and JSS)

– ReplicaCatalog from the WP2 distribution (RB)

1 PostgreSQL installation and configuration

Both RB and JSS use PostgreSQL database for implementing the internal job queue. The installation kit and the documentation for PostgreSQL can be found at the following URL:



Required PostgreSQL version is 7.1.3 or higher. The following packages need to be installed (respecting the order in which they are listed): postgresql-libs, postgresql-devel, postgresql, postgresql-server, postgresql-tcl, postgresql-tk and postgresql-docs.

PostgreSQL also needs packages cyrus-sasl-1-5-11 (or higher), openssl-0.9.5a and openssl-devel-0.9.5a (or higher). All of them can be found at the following URL:



Hereafter are reported the configuration options that must be used when installing the package:

--with-CXX

--with-tcl

--enable-odbc

Postgresql 7.1.3 is also available in rpm format (to be installed as root) at the URL:



Once PostgreSQL has been installed, you need as root to create a new system account dguser (i.e. using the -r option of the adduser OS command) and to follow the steps reported here below to create an empty database for JSS:

su - postgres (become the postgres user)

createuser -d -A dguser (create the new database user dguser)

su - dguser (become the user dguser)

createdb (create the new database for JSS)

The name of the created database must be the same as the one assigned to the Database_name attribute in the file jss.conf (see section 4.2.4.2 for more details), otherwise JSS will use the "template1" database as default. Avoiding the use of the template database is in any case strongly recommended.

The RB server uses instead another database named "rb", which is created by RB itself.
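
For example, to create a JSS database explicitly named jssdb (a hypothetical name; it must then be assigned to the Database_name attribute in jss.conf, see section 4.2.4.2):

# as root
su - postgres            # become the postgres user
createuser -d -A dguser  # create the database user dguser
exit
su - dguser              # become the user dguser
createdb jssdb           # create the JSS database with an explicit name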

2 Condor-G installation and configuration

Condor-G release required by JSS is CondorG 6.3.1 for INTEL-LINUX-GLIBC21. The Condor-G installation toolkit can be found at the following URL:

.

whilst it is available in rpm format (to be installed as root) at:



Installation and configuration are quite straightforward and for details the reader can refer to the README file included in the Condor-G package. Main steps to be performed after having unpacked the package as root are:

– become dguser (su - dguser)

– make sure the directory where you are going to install CondorG is owned by dguser

– make sure the Globus Toolkit 2.0 has been installed on the platform

– run the /opt/CondorG/setup.sh installation script

– remove the link ~dguser/.globus/certificates created by the installation script

Moreover some additional configuration steps have to be performed in the Condor configuration file pointed to by the CONDOR_CONFIG environment variable set during installation. In the $CONDOR_CONFIG file the following attributes need to be modified:

RELEASE_DIR = $(CONDORG_INSTALL_PATH)

CONDOR_ADMIN = <administrator's e-mail address>

UID_DOMAIN = < the domain of the machine (e.g. pd.infn.it)>

FILESYSTEM_DOMAIN = < the domain of the machine (e.g. pd.infn.it)>

HOSTALLOW_WRITE = *

CRED_MIN_TIME_LEFT = 0

GLOBUSRUN = $(GLOBUS_LOCATION)/bin/globusrun

and the following entries need to be added:

SKIP_AUTHENTICATION = YES

AUTHENTICATION_METHODS = CLAIMTOBE

DISABLE_AUTH_NEGOTIATION = TRUE

GRIDMANAGER_CHECKPROXY_INTERVAL = 600

GRIDMANAGER_MINIMUM_PROXY_TIME = 180

The environment variable CONDORG_INSTALL_PATH is also set during installation and points to the path where the Condor-G package has been installed.

3 ClassAd installation and configuration

The ClassAd release required by JSS and RB is classads-0.0 (or higher). The ClassAd library documentation can be found at the following URL:

.

whilst it is available in rpm format (to be installed as root) at:



4 ReplicaCatalog installation and configuration

The ReplicaCatalog release required by RB is ReplicaCatalogue-gcc32dbg-2.0 (or higher) that is available in rpm format (to be installed as root) at:



2 RPM installation

In order to install the Resource Broker and the Job Submission services, the following command has to be issued with root privileges:

rpm -ivh [--prefix <install-path>] jobsubmission-1.0.0-6.i386.rpm

By default the rpm installs the software in the “/opt/edg” directory. Using the --prefix directive, it is possible to install the software in a different location (i.e. in the <install-path> directory). The --relocate <old-path>=<new-path> directive can instead be used to relocate an installation from <old-path> to <new-path>.

3 The Installation Tree structure

When the jobsubmission rpm has been installed, the following directory tree is created:

/bin

/bin/Rbserver

/bin/jssparser

/bin/jssserver

/etc

/etc/jss.conf

/etc/rb.conf

/etc/workload.csh

/etc/workload.sh

/etc/workload_jss_env.csh

/etc/workload_jss_env.sh

/lib (empty dir)

/sbin

/sbin/broker

/sbin/jobsubmission

The directory bin contains all the RB and JSS server process executables: Rbserver, jssserver and jssparser. In etc are stored the configuration files (see sections 4.2.4.1 and 4.2.4.2 below), while sbin contains the scripts to start and stop the RB and JSS processes.

4 Configuration

Once the rpm has been installed, the RB and JSS services must be properly configured. This can be done by editing the two files rb.conf and jss.conf that are stored in <install-path>/etc. Actions to be performed to configure the Resource Broker and the Job Submission Service are described in the following two sections.

1 RB configuration

Configuration of the Resource Broker is accomplished by editing the file “<install-path>/etc/rb.conf” to set the contained attributes appropriately. They are listed hereafter, grouped according to the functionality they relate to:

– MDS_contact, MDS_port and MDS_timeout refer to the II service and respectively represent the hostname where this service is running, the port number, and the timeout in seconds when the RB queries the II. E.g.:

MDS_contact = "af.infn.it";

MDS_port = 2170;

MDS_timeout = 60;

– MDS_gris_port refers to the port to be used by RB to contact GRIS’es. E.g.:

MDS_gris_port = 2135;

– MDS_multi_attributes defines the list of the attributes that are multi-valued in the MDS (i.e. that can assume multiple values). It is recommended not to modify the default value for this parameter, which is currently:

MDS_multi_attributes = {

"AuthorizedUser",

"RunTimeEnvironment",

"CloseCE"

};

– MDS_basedn defines the basedn, i.e. the distinguished name (DN) to use as a starting place for searches in the information index. It is recommended not to modify the default value for this parameter, which is currently set to:

MDS_basedn = "o=Grid"

– LB_contact and LB_port refer to the LB Server service and represent respectively the hostname and port where the LB server is listening for connections. E.g.:

LB_contact = "af.infn.it";

LB_port = 7846;

The Logging library, i.e. the library providing APIs for logging job events to the LB (which is linked into the RB), reads its immediate logging destination from the environment variable DGLOG_DEST (see section 4.1.5), hence it is not dealt with in the configuration file. DGLOG_DEST defaults to “x-dglog://localhost:15830”, which is the correct value, hence it normally does not need to be set; this indicates that the LB local-logger services should normally run on the same host as the RB server.

– JSS_contact and JSS_server_port refer to the JSS and represent respectively the hostname (it must be the same host as the RB server) and the port number (it must match the RB_client_port parameter in the jss.conf file - see section 4.2.4.2) where the JSS server is listening. Moreover JSS_client_port represents the port used by the RB to listen for JSS communications. The value of the latter parameter must match the JSS_server_port parameter in the jss.conf file (see section 4.2.4.2). An example for these parameters is reported hereafter:

JSS_contact = "af.infn.it";

JSS_client_port = 8881;

JSS_server_port = 9991;

– JSS_backlog and UI_backlog define the maximum number of simultaneous connections from JSS and UI supported by the socket. Default values are:

JSS_backlog = 5;

UI_backlog = 5;

– UI_server_port is the port used by the RB server to listen for requests coming from the User Interface. Default value for this parameter is:

UI_server_port = 7771;

– RB_pool_size represents the maximum number of requests managed simultaneously by the RB server. Default value for this parameter is:

RB_pool_size = 16;

– RB_purge_threshold defines the threshold age in seconds for RBRegistry information. Indeed the RB purges all the information of a job and frees its storage space (input/output sandboxes) when the last update of the internal information database took place more than RB_purge_threshold seconds ago. Default value for this parameter is about one week:

RB_purge_threshold = 600000;

– RB_cleanup_threshold represents the span of time (expressed in seconds) between two consecutive cleanups of the job registry. During the registry cleanup the RB removes all the entries of those jobs classified as ABORTED. At the end of the cleanup, if needed (see RB_purge_threshold), the purging of the registry is performed as well. The default value for this configuration parameter is:

RB_cleanup_threshold = 3600;

Finally, there is:

– RB_sandbox_path, which represents the pathname of the root sandbox directory, i.e. the complete pathname of the directory where the RB creates both input/output sandbox directories and stores the .Brokerinfo file. Default value for this parameter is the temporary directory:

RB_sandbox_path = "/tmp"

The administrator must in any case tailor this value according to the estimated amount of job input/output sandbox files in the given period, in order not to fill up the disk space of the RB machine.

No semicolon has to be put at the end of the last field in the rb.conf file.
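
Putting together the attributes described above, a complete rb.conf could therefore look like the following sketch (hostnames are the examples used in this section and must be replaced with real ones; note the missing semicolon after the last field):

MDS_contact = "af.infn.it";
MDS_port = 2170;
MDS_timeout = 60;
MDS_gris_port = 2135;
MDS_multi_attributes = {
"AuthorizedUser",
"RunTimeEnvironment",
"CloseCE"
};
MDS_basedn = "o=Grid";
LB_contact = "af.infn.it";
LB_port = 7846;
JSS_contact = "af.infn.it";
JSS_client_port = 8881;
JSS_server_port = 9991;
JSS_backlog = 5;
UI_backlog = 5;
UI_server_port = 7771;
RB_pool_size = 16;
RB_purge_threshold = 600000;
RB_cleanup_threshold = 3600;
RB_sandbox_path = "/tmp"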

2 JSS configuration

Configuration of the Job Submission Service is accomplished by editing the file “<install-path>/etc/jss.conf” to set the contained parameters appropriately. They are listed hereafter together with their meanings:

– Condor_submit_file_prefix defines the prefix for the CondorG submission file (the job identifier dg_jobId is then appended to this prefix to build the actual submission file name). Default value for this parameter is:

Condor_submit_file_prefix = "/var/tmp/CondorG.sub";

– Condor_log_file defines the absolute path name of the CondorG log file, i.e. the file where the events for the submitted jobs are recorded. Default value for this parameter is:

Condor_log_file = "/var/tmp/CondorG.log";

– Condor_stdoe_dir defines the directory where the standard output and standard error files of CondorG are temporarily saved. Default value is:

Condor_stdoe_dir = "/var/tmp";

– Job_wrapper_file_prefix is the prefix for the Job Wrapper file name (i.e. the script wrapping the actual job which is submitted on the CE). As before the job identifier dg_jobId is appended to this prefix to build the actual file name. Default value for this parameter is:

Job_wrapper_file_prefix = "/var/tmp/Job_wrapper.sh";

– Database_name is the name of the Postgres database where JSS registers information about submitted jobs. This name must correspond to an existing database (how to create it is briefly described in section 4.2.1.1). Default value for the database name is the one of the database automatically created when installing Postgres, i.e.:

Database_name = "template1";

– Database_table_name is the name of the table in the previous database. This table is created by the JSS itself if not found. Default value for this parameter is:

Database_table_name = "condor_submit";

– JSS_server_port and RB_client_port represent respectively the port used by JSS to listen for RB communications and the port used to communicate with the RB server (e.g. for sending notifications). The two mentioned parameters have to match respectively the JSS_client_port and JSS_server_port parameters in the rb.conf file (see section 4.2.4.1). Default values are:

JSS_server_port = 8881;

RB_client_port = 9991;

– Condor_log_file_size indicates the size in bytes at which the CondorG.log log file has to be split. Default value is:

Condor_log_file_size = 64000;
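
Analogously, a jss.conf assembled from the defaults listed above would read as follows (the Database_name value should be replaced with the name of the database actually created as described in section 4.2.1.1):

Condor_submit_file_prefix = "/var/tmp/CondorG.sub";
Condor_log_file = "/var/tmp/CondorG.log";
Condor_stdoe_dir = "/var/tmp";
Job_wrapper_file_prefix = "/var/tmp/Job_wrapper.sh";
Database_name = "template1";
Database_table_name = "condor_submit";
JSS_server_port = 8881;
RB_client_port = 9991;
Condor_log_file_size = 64000;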

5 Environment variables

1 RB

Environment variables that have to be set for the RB are listed hereafter:

– PGSQL_INSTALL_PATH the Postgres database installation path. Default value is /usr/local/pgsql.

– PGDATA the path where the Postgres database data files are stored. Default value is /usr/local/pgsql/data.

– GDMP_INSTALL_PATH the gdmp installation path. Default value is /opt/edg.

Setting of PGSQL_INSTALL_PATH and PGDATA is only needed if installation is not performed from rpm. Moreover $GDMP_INSTALL_PATH/lib has to be added to LD_LIBRARY_PATH. Finally, there are other environment variables needed at run-time by RB. They are:

– EDG_WL_RB_CONFIG_DIR the RB configuration directory

– X509_HOST_CERT the host certificate file path

– X509_HOST_KEY the host private key file path

– X509_USER_PROXY the user proxy certificate file path

– GRIDMAP location of the Globus grid-mapfile that translates X509 certificate subjects into local Unix usernames. The default is /etc/grid-security/grid-mapfile.

Anyway, all variables in the latter group are set by the start_Broker script located in <install-path>/utils.

2 JSS

Environment variables that have to be set for the JSS are listed hereafter:

– PGSQL_INSTALL_PATH the Postgres database installation path. Default value is /usr/local/pgsql.

– PGDATA the path where the Postgres database data files are stored. Default value is /usr/local/pgsql/data.

– CONDOR_CONFIG the CondorG configuration file path. Default value is /usr/local/CondorG/etc/condor_config.

– CONDORG_INSTALL_PATH the CondorG installation path. Default value is /usr/local/CondorG.

Setting of PGSQL_INSTALL_PATH and PGDATA is only needed if installation is not performed from rpm. Moreover:

– $CONDORG_INSTALL_PATH/bin

– $CONDORG_INSTALL_PATH/sbin

– $PGSQL_INSTALL_PATH/bin (only if installation is not performed from rpm)

must be included in the PATH environment variable and

– $CONDORG_INSTALL_PATH/lib,

– $PGSQL_INSTALL_PATH/lib (only if installation is not performed from rpm)

have to be added to LD_LIBRARY_PATH. Finally, there are other environment variables needed at run-time by JSS. They are:

– EDG_WL_JSS_CONFIG_DIR the JSS configuration directory

– X509_HOST_CERT the host certificate file path

– X509_HOST_KEY the host private key file path

– X509_USER_PROXY the user proxy certificate file path

– GRIDMAP location of the Globus grid-mapfile that translates X509 certificate subjects into local Unix usernames. The default is /etc/grid-security/grid-mapfile.

Anyway, all variables in the latter group are set by the start_JobSubmission script located in <install-path>/utils.
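
As an illustration, the PATH and LD_LIBRARY_PATH updates required by JSS could be performed with the following bash snippet (a sketch using the default locations quoted above; the PGSQL entries are only needed when PostgreSQL is not installed from rpm):

export CONDORG_INSTALL_PATH=/usr/local/CondorG
export CONDOR_CONFIG=$CONDORG_INSTALL_PATH/etc/condor_config
export PGSQL_INSTALL_PATH=/usr/local/pgsql
export PATH=$CONDORG_INSTALL_PATH/bin:$CONDORG_INSTALL_PATH/sbin:$PGSQL_INSTALL_PATH/bin:$PATH
export LD_LIBRARY_PATH=$CONDORG_INSTALL_PATH/lib:$PGSQL_INSTALL_PATH/lib:$LD_LIBRARY_PATH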

3 Information Index

The Information Index (II) is the service queried by the Resource Broker to get information about resources for the submitted jobs during the matchmaking process. An II must hence be deployed for each RB/JSS instance.

This section describes steps to be performed to install and configure the Information Index service.

1 Required software

For installing the II, apart from the informationindex rpm (see section 4.3.2 for details), the following Globus Toolkit 2.0 rpms are needed:

– globus_libtool-gcc32dbg_rtl-1.4.i386.rpm

– globus_openldap-gcc32dbg_pgm-2.0.14.i386.rpm

– globus_openldap-gcc32dbg_rtl-2.0.14.i386.rpm

– globus_gss_assist-gcc32dbg_rtl-2.0.i386.rpm

– globus_openldap-gcc32dbgpthr_rtl-2.0.14.i386.rpm

– globus_openssl-gcc32dbg_rtl-0.9.6b.i386.rpm

– globus_cyrus_sasl-gcc32dbgpthr_rtl-1.5.27.i386.rpm

– globus_ssl_utils-gcc32dbg_rtl-2.1.i386.rpm

– globus_openssl-gcc32dbgpthr_rtl-0.9.6b.i386.rpm

– globus_mds_back_giis-gcc32dbg_pgm-0.2.i386.rpm

– globus_libtool-gcc32dbgpthr_rtl-1.4.i386.rpm

– globus_cyrus_sasl-gcc32dbg_rtl-1.5.27.i386.rpm

– globus_gssapi_gsi-gcc32dbg_rtl-2.0.i386.rpm

The above listed rpms are available at under the directory beta-xx/RPMS (recommended beta is 21 or higher). All the needed packages can be downloaded with the command

wget -nd -r /

and installed with

rpm -ivh

2 RPM installation

In order to install the Information Index service, the following command has to be issued with root privileges:

rpm -ivh [--prefix <install-path>] informationindex-1.0.0-5.i386.rpm

By default the rpm installs the software in the “/opt/edg” directory. Using the --prefix directive, it is possible to install the software in a different location (i.e. in the <install-path> directory). The --relocate <old-path>=<new-path> directive can instead be used to relocate an installation from <old-path> to <new-path>.

3 The Installation tree structure

When the informationindex rpm has been installed, the following directory tree is created:

/etc

/etc/configure_workload

/etc/grid-info-site-giis.conf

/etc/grid-info-slapd-giis.conf

/etc/workload.csh

/etc/workload.sh

/schema

/schema/core.schema

/schema/grid.ce.schema

/schema/grid.globusversion.schema

/schema/grid.gramscheduler.schema

/schema/host.schema

/schema/grid.se.schema

/schema/my.mon.schema

/utils

/utils/SXXII

/var (empty dir)

In schema are located the schema files, utils contains the init.d-style startup script SXXII, in etc are stored the configuration files, and var (initially empty) is used by the II to store files created at start-up, containing the args and pid of the II process.

4 Configuration

The II has two configuration files, located in <install-path>/etc and named:

– grid-info-slapd-giis.conf

– grid-info-site-giis.conf

In grid-info-slapd-giis.conf are specified the schema file locations and the database type, whilst in grid-info-site-giis.conf are listed the entries for the GRISes that are registered to this II. Each entry has the following format:

dn: service=register, dc=mi, dc=infn, dc=it, o=grid

objectclass: GlobusTop

objectclass: GlobusDaemon

objectclass: GlobusService

objectclass: GlobusServiceMDSResource

Mds-Service-type: ldap

Mds-Service-hn: bbq.mi.infn.it

Mds-Service-port: 2135

Mds-Service-Ldap-sizelimit: 20

Mds-Service-Ldap-ttl: 200

Mds-Service-Ldap-cachettl: 50

Mds-Service-Ldap-timeout: 30

Mds-Service-Ldap-suffix: o=grid

The field Mds-Service-hn specifies the GRIS address; Mds-Service-port specifies the GRIS port (2135 is strongly recommended), whilst the other entries are related to the ldap sizelimit and ldap ttl. To add a new GRIS to the given II, it suffices to add to the grid-info-site-giis.conf file a new entry like the one just shown.

Another file that can be used to configure the II is the start-up script <install-path>/utils/SXXII. This file specifies the number of the port used by the II to listen for requests, whose default is 2170. This value can be changed to make the II listen on another port, provided it matches the value of the MDS_port attribute in the RB configuration file rb.conf (see section 4.2.4.1).

5 Environment Variables

The only environment variable needed by the II to run is the Globus installation path GLOBUS_LOCATION, which is anyway set by the start-up script SXXII.

4 User Interface

This section describes the steps needed to install and configure the User Interface, i.e. the software module of the WMS allowing the user to access the main services made available by the components of the scheduling sub-layer.

1 Required software

For installing the UI, apart from the userinterface rpm (see section 4.4.2 for details), the following Globus Toolkit 2.0 rpms available at are needed:

– globus_gss_assist-gcc32dbgpthr_rtl-2.0-21

– globus_gssapi_gsi-gcc32dbgpthr_rtl-2.0-21

– globus_ssl_utils-gcc32dbgpthr_rtl-2.1-21

– globus_gass_transfer-gcc32dbg_rtl-2.0-21

– globus_openssl-gcc32dbgpthr_rtl-0.9.6b-21

– globus_ftp_control-gcc32dbg_rtl-1.0-21

– globus_user_env-noflavor_data-2.1-21

– globus_gss_assist-gcc32dbg_rtl-2.0-21

– globus_gssapi_gsi-gcc32dbg_rtl-2.0-21

– globus_ftp_client-gcc32dbg_rtl-1.1-21

– globus_ssl_utils-gcc32dbg_rtl-2.1-21

– globus_ssl_utils-gcc32dbg_pgm-2.1-21

– globus_gass_copy-gcc32dbg_rtl-2.0-21

– globus_gsincftp-gcc32dbg_pgm-0.1-21

– globus_openssl-gcc32dbg_rtl-0.9.6b-21

– globus_common-gcc32dbg_rtl-2.0-21

– globus_profile-edgconfig-0.9-1

– globus_io-gcc32dbg_rtl-2.0-21

– globus_core-edgconfig-0.6-2

– obj-globus-1.0-4.edg

– globus_cyrus_sasl-gcc32dbgpthr_rtl-1.5.27-21

– globus_libtool-gcc32dbgpthr_rtl-1.4-21

– globus_mds_common-gcc32dbg_pgm-2.2-21

– globus_openldap-gcc32dbg_pgm-2.0.14-21

– globus_openldap-gcc32dbgpthr_rtl-2.0.14-21

– globus_core-gcc32dbg_pgm-2.1-21

Moreover the Python interpreter, version 2.1.1 has to be installed on the submitting machine (this package can be found at ). The rpm for this package is available at as:

python-2.1.1-3.i386.rpm

All the needed packages can be downloaded with the command

wget -nd -r /

and installed with

rpm -ivh

2 RPM installation

In order to install the User Interface, the following command has to be issued with root privileges:

rpm -ivh [--prefix <install-path>] userinterface-1.0.0-6.i386.rpm

By default the rpm installs the software in the “/opt/edg” directory. Using the --prefix directive, it is possible to install the software in a different location (i.e. in the <install-path> directory). The --relocate <old-path>=<new-path> directive can instead be used to relocate an installation from <old-path> to <new-path>.

3 The tree structure

After the userinterface rpm has been installed, the following directory tree is created:

/bin

/bin/JobAdv.py

/bin/JobAdv.pyc

/bin/UIchecks.py

/bin/UIchecks.pyc

/bin/UIutils.py

/bin/UIutils.pyc

/bin/dg-job-cancel

/bin/dg-job-get-logging-info

/bin/dg-job-get-output

/bin/dg-job-id-info

/bin/dg-job-list-match

/bin/dg-job-status

/bin/dg-job-submit

/bin/libRBapi.py

/bin/libRBapi.pyc

/etc

/etc/UI_ConfigENV.cfg

/etc/UI_Errors.cfg

/etc/UI_Help.cfg

/etc/configure_workload

/etc/job_template.tpl

/etc/workload.csh

/etc/workload.sh

/etc/workload_ui_env.csh

/etc/workload_ui_env.sh

/lib

/lib/libLBapi.a

/lib/libLBapi.la

/lib/libLBapi.so

/lib/libLBapi.so.0

/lib/libLBapi.so.0.0.0

/lib/libLOGapi.a

/lib/libLOGapi.la

/lib/libLOGapi.so

/lib/libLOGapi.so.0

/lib/libLOGapi.so.0.0.0

/lib/libRBapic.a

/lib/libRBapic.la

/lib/libRBapic.so

/lib/libRBapic.so.0

/lib/libRBapic.so.0.0.0

/utils

/utils/set_python

The bin directory contains all the UI python scripts, including the commands made available to the user. In lib are installed all the API wrapper shared libraries, while in etc can be found the configuration and error files UI_ConfigENV.cfg and UI_Errors.cfg, plus the help file (UI_Help.cfg) and a template of a job description in JDL (job_template.tpl). Finally there is the /utils/set_python script that can be sourced to set the environment variable PYTHONPATH, representing the library search path for python.

4 Configuration

Configuration of the User Interface is accomplished by editing the file “<install-path>/etc/UI_ConfigENV.cfg” to set the contained parameters appropriately. They are listed hereafter together with their meanings:

– DEFAULT_STORAGE_AREA_IN defines the path of the directory where files coming from RB (i.e. the jobs Output Sandbox files) are stored if not specified by the user through commands options. Default value for this parameter is:

DEFAULT_STORAGE_AREA_IN = /tmp

– requirements, rank and RetryCount represent the values that are assigned by the UI to the corresponding job attributes (mandatory attributes) if these have not been provided by the user in the JDL file describing the job. Default values are:

requirements = TRUE

rank = - other.EstimatedTraversalTime

RetryCount = 3

– ErrorStorage represents the path of the location where the UI creates log files. Default location is:

ErrorStorage = /tmp

– RetryCountLB and RetryCountJobId are the numbers of UI retries on fatal errors, respectively when opening a connection with an LB and when querying the LB for information about a given job. Default values for these parameters are:

RetryCountLB = 1

RetryCountJobId = 1

Moreover there are two sections reserved for the addresses of the LBs and RBs that are accessible to the UI from the machine where it is installed. Special markers (e.g. %%beginLB%%), which must not be modified, indicate where the sections begin and end. An example of the two sections is reported hereafter:

%%beginLB%%







%%endLB%%

%%beginRB%%

af.infn.it:7771

af.infn.it:7771

%%endRB%%

LB addresses must be in the format:

[<protocol>://]<lb-hostname>:<port>

where, if not provided, the default for <protocol> is “https”, whilst RB addresses must be in the format:

<rb-hostname>:<port>
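
Putting everything together, a minimal UI_ConfigENV.cfg following the description above could look like this sketch (the LB and RB addresses are examples built from the hosts and ports used elsewhere in this document; the section markers must be left unchanged):

DEFAULT_STORAGE_AREA_IN = /tmp
requirements = TRUE
rank = - other.EstimatedTraversalTime
RetryCount = 3
ErrorStorage = /tmp
RetryCountLB = 1
RetryCountJobId = 1
%%beginLB%%
https://af.infn.it:7846
%%endLB%%
%%beginRB%%
af.infn.it:7771
%%endRB%%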

5 Environment variables

Environment variables that have to be set for the User Interface are listed hereafter:

– X509_USER_KEY the user private key file path. Default value is $HOME/.globus/userkey.pem

– X509_USER_CERT the user certificate file path. Default value is $HOME/.globus/usercert.pem

– X509_CERT_DIR the trusted certificate directory and ca-signing-policy directory. Default value is /etc/grid-security/certificates

– X509_USER_PROXY the user proxy certificate file path. Default value is /tmp/x509up_u<UID>, where UID is the user identifier on the machine

as required by GSI. Moreover there are:

– PYTHONPATH Python modules import search path. It has to be set to ................
