Cape Gateway - System Administration





April 2004

Version History

|Date |Version |Author |Description of Change |

|22 April 2004 |0.1 |Johan van Zijl |Initial document for comment. |

|May 2004 |1.0 |Kashif Samodien |QA final version of document. |

Approval

|Date |Function |Required for Class |Approved by |Title |

|May 2004 | | |Kashif Samodien |Project Manager |

| | | | |WCPG Project Manager |

Detailed contents

1. Overview

1.1. Minimum hardware requirements

1.2. Software required for installation

2. System installation

2.1. Installing Apache

2.2. Installing PHP

2.3. Installing MySQL

2.4. Installing expat/aspell

2.5. Installing the application

3. Backup and recovery

3.1. System backups

3.2. System recovery

4. Day-to-day administration

5. Resources

Overview

Cape Gateway is a web-based portal which gives the average South African access to a wide range of government information, allowing every citizen to view the procedures, documents and other information generated by the Western Cape Government freely and in near real time.

1 Minimum hardware requirements

➢ Dual 1 GHz Intel Xeon processors

➢ 1 GB of memory

➢ 4-disk SCSI array (RAID)

➢ 292 GB of hard-drive space

➢ Dual 100/1000 Ethernet interfaces

2 Software required for installation

➢ Apache 2.0 source

➢ PHP 4.3.x source

➢ Red Hat Linux ES, SUSE or Fedora

➢ MySQL 4.x source

➢ aspell 0.50 source

➢ expat XML parser

➢ gcc 3.2.x

➢ g++ 3.2.x

➢ portal source and static data scripts

➢ SOAP library

System installation

Complete a clean install from the Linux CDs using a minimal or server profile. Make sure that all non-essential components are removed before installing: a server install needs to be kept as clean as possible to reduce the risk of introducing software with known security flaws into the system.

For more information on installing Linux, see the distributions' online documentation sites, which give detailed, screen-by-screen walkthroughs.

Once the install has been completed, extract the above-mentioned sources into /usr/local/install; we will use this as our staging area for configuring and compiling the components listed above.

1 Installing Apache

First extract the Apache 2 source and change into the source directory, then configure Apache with --enable-rewrite, --enable-redirect and --enable-so; the last of these allows us to compile and deploy dynamically bound Apache modules such as PHP, mod_perl and so on.

Once configured, simply type make and, once that completes, make install. The default install location of Apache is /usr/local/apache2, which is perfect; the average Linux machine running Apache simply sticks it in /etc/httpd, which is just annoying and cumbersome.
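Concretely, the build amounts to the sketch below. The version directory httpd-2.0.49 is an assumed example and should match the source you actually downloaded; the flags shown are the rewrite and shared-object options from the text.

```shell
#!/bin/sh
# Sketch: configure, build and install Apache 2 from the staging area.
# httpd-2.0.49 is an assumed example version directory.
SRC=/usr/local/install/httpd-2.0.49
PREFIX=/usr/local/apache2
if [ -d "$SRC" ]; then
  cd "$SRC"
  ./configure --prefix="$PREFIX" --enable-rewrite --enable-so
  make && make install
fi
```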

When the install completes we need to create a startup script for starting and stopping the server at system boot. As an old Sys V jockey, I keep the primary startup script in /etc/init.d/ and link the script into the necessary run-levels; the most important are rc3 and rc6, and if you cover those you will almost always be safe. You may also wish to take the easy way out and add it to the /etc/initrc script which drives the main init process, but I do recommend doing it the proper way, just to make future maintenance easier.
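A minimal sketch of that setup, assuming a small wrapper script that delegates to apachectl (the link names S85/K15 are illustrative conventions, not mandated values):

```shell
#!/bin/sh
# Sketch: install a Sys V style init script for Apache and link it
# into run-level 3 (start) and run-level 6 (stop on reboot).
INITD=/etc/init.d
if [ -d "$INITD" ] && [ -d /etc/rc3.d ] && [ -d /etc/rc6.d ]; then
  cat > "$INITD/httpd" <<'EOF'
#!/bin/sh
# Delegate start/stop/restart to apachectl.
exec /usr/local/apache2/bin/apachectl "$@"
EOF
  chmod 755 "$INITD/httpd"
  # S scripts run when entering the level, K scripts when leaving it.
  ln -sf ../init.d/httpd /etc/rc3.d/S85httpd
  ln -sf ../init.d/httpd /etc/rc6.d/K15httpd
fi
```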

We have now installed the Apache web server and can start it either with the /usr/local/apache2/bin/apachectl script, via the /etc/init.d/ script, or with /usr/local/apache2/bin/httpd directly.

2 Installing PHP

Now extract the PHP 4.3.x source and again change into the resulting directory. To integrate PHP into Apache as a dynamic module we must tell the PHP build script where to find the Apache APXS tool; this is normally /usr/local/apache2/bin/apxs. If it doesn't exist, you must rebuild Apache with the --enable-so option.

Run the configure command with --with-apxs=/usr/local/apache2/bin/apxs --with-gettext --with-pspell --with-xml --prefix=/usr/local; these options are required, as various components in the source code rely on them for critical functionality.

You might have to build expat if your distro didn't provide it by default; most distros do, as XML has become a core configuration format even in the UNIX world.

Once the build configuration succeeds you may again type make; make install. The --prefix option tells the PHP build engine where to install the software.

The build system will automatically build the dynamic PHP module and install it into the Apache folders; it will even try to configure the server for you automatically, although we will replace that configuration with our own.
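Put together, the PHP build looks like the sketch below. The version directory is an assumed example, and note one deliberate change: when building against Apache 2, PHP's configure flag is normally --with-apxs2 rather than the --with-apxs spelling used in the text.

```shell
#!/bin/sh
# Sketch: build PHP 4.3.x as a dynamic Apache 2 module.
# php-4.3.6 is an assumed example version directory.
SRC=/usr/local/install/php-4.3.6
APXS=/usr/local/apache2/bin/apxs
if [ -d "$SRC" ] && [ -x "$APXS" ]; then
  cd "$SRC"
  ./configure --with-apxs2="$APXS" \
              --with-gettext --with-pspell --with-xml \
              --prefix=/usr/local
  make && make install
fi
```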

3 Installing MySQL

Again we extract the source code and change into that directory. MySQL is the simplest: configure and make are all that needs to be done. The default install location of MySQL is /usr/local/mysql, which is just fine.

With MySQL you will find a mysql.server script in the third-party or support folders; simply copy this script to /etc/init.d/ and link it into the desired run-levels. Check my note on Apache if you are in doubt.
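As a sketch, with the version directory assumed (in most 4.x source trees the script lives under support-files/):

```shell
#!/bin/sh
# Sketch: build MySQL 4.x and install its bundled init script.
# mysql-4.0.18 is an assumed example version directory.
SRC=/usr/local/install/mysql-4.0.18
if [ -d "$SRC" ]; then
  cd "$SRC"
  ./configure && make && make install
  # Copy the bundled start/stop script into the init system.
  cp support-files/mysql.server /etc/init.d/mysql
  chmod 755 /etc/init.d/mysql
fi
```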

4 Installing expat/aspell

These components only need to be compiled if they were not part of the default install. Again, most distros include them, but some, such as Red Hat ES, install a broken version of aspell, which you will need to remove with rpm -e aspell-x.x.x. If you are unsure of the installed version, simply type rpm -qa | grep aspell; this returns the installed package name, which can be copied and pasted back into the removal command.
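The query-then-remove step can be combined into one small sketch on any RPM-based system:

```shell
#!/bin/sh
# Sketch: find the installed aspell package and remove it.
if command -v rpm >/dev/null 2>&1; then
  PKG=$(rpm -qa | grep '^aspell')   # e.g. aspell-0.33.7.1-21
  if [ -n "$PKG" ]; then
    rpm -e "$PKG"
  fi
fi
```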

5 Installing the application

Extract the source code, or simply check it out from the development server. It needs to live in /var/www/html/site, and this directory needs to be soft-linked to .za in /var/www/html.

The server code is installed into /var/www/server. Your /var file system tends to be your scratch file system, which allows us to place our application there, as Apache is simply going to execute that code.

Also make sure that you install the php.ini from the source tree into the /usr/local/lib folder. Lastly, use the static SQL scripts to create and populate the database required by the portal.
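The layout steps can be sketched as follows; the php.ini and SQL script locations inside the source tree are assumptions, so adjust them to the actual tree.

```shell
#!/bin/sh
# Sketch: lay out the portal code, php.ini and database.
# Paths inside the source tree are assumed examples.
HTML=/var/www/html
if [ -d "$HTML/site" ]; then
  ln -sf site "$HTML/.za"                 # soft-link site -> .za
  cp "$HTML/site/php.ini" /usr/local/lib/php.ini
  # Create and populate the database from the static SQL scripts.
  for f in "$HTML"/site/sql/*.sql; do
    if [ -f "$f" ]; then
      /usr/local/mysql/bin/mysql < "$f"
    fi
  done
fi
```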

As everything is now installed and ready to run, I would suggest you restart the server and make sure everything comes up correctly. The portal will still be a little empty, as we first need to do a data download from BEE before the portal has anything to display.

To accomplish this, take the cron.script in the source tree, copy it, and paste it into a terminal where you have typed crontab -e.

Simply save the file and exit; the schedule is now installed and will start the download within one minute.
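The real entry lives in cron.script; purely as an illustration of the shape such an entry takes (the path, script name and schedule below are assumptions, not the actual values), a crontab line looks like this:

```
# m  h  dom mon dow  command
0    1  *   *   *    /var/www/server/download.sh >> /var/log/portal-download.log 2>&1
```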

Backup and recovery

1 System backups

To guarantee the availability of the live portal, regular backups of both the data and the actual portal scripts need to be made.

For a single-machine environment I would suggest that the following directories be backed up every single day, using a rotating cycle starting on Sunday:

➢ /usr/local/apache2/conf/

➢ /usr/local/apache2/logs/

➢ /var/www/html

➢ /var/www/server

➢ a dump of MySQL created using the mysqldump command; please make sure that you use the create-inserts option, or else you might have to sit for nights trying to copy and paste data between terminals.

➢ Just for good measure, back up /usr/local/mysql/data as well; you can recover from here if need be.

I would suggest we utilize eight tapes: one for each day of the week, plus a final tape to cycle out to off-site storage once a month.

As recovery in almost all cases requires a clean install of the base software, it is recommended that a separate pre-installed machine be mirrored in place on a daily basis to allow for hot swap-over if and when required. As we are using Intel machines, they may tend to be a little less reliable than the more commercial Compaq or Dell machines.

A suggested backup cycle might be as follows: create a single backup for each day of the week, each on its own tape; once a week, do a full system backup, either Sunday or Monday depending on when your cycle starts. The full backup is made on a separate tape and stored off-site for emergencies only.
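A minimal nightly script covering the directories listed above might look like this; the tape device /dev/st0, the dump file location and the mysqldump options are assumed examples to adapt.

```shell
#!/bin/sh
# Sketch: nightly backup of the listed directories to tape.
# /dev/st0 and the dump file location are assumed examples.
DAY=$(date +%A)                  # e.g. "Sunday" names the cycle slot
DUMP=/var/backups/portal-$DAY.sql
TAPE=/dev/st0
if [ -c "$TAPE" ] && [ -d /var/www/html ]; then
  # Dump the database first so it goes to tape with the files.
  /usr/local/mysql/bin/mysqldump --all-databases > "$DUMP"
  tar cf "$TAPE" \
      /usr/local/apache2/conf /usr/local/apache2/logs \
      /var/www/html /var/www/server \
      /usr/local/mysql/data "$DUMP"
fi
```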

Using the above procedure reduces the risk of a totally non-recoverable system to a bare minimum. I would still suggest that a secondary machine be put in place to ensure up-time during any unforeseen system failures.

2 System recovery

Recovering the system in a single-host environment means downtime on the primary site and is not ideal. For this type of recovery, simply install all the components as described in the installation section, but do not install the code or start the download. When you reach the final step, restore the tapes back to the file system and then add the cron entries; the system will resume the feed from the last backup date found in the backed-up lastUpdated.txt file.
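Assuming the backups were written to a tape on /dev/st0 with database dumps stored alongside in /var/backups (both assumptions, to match however your backup job was set up), the restore step looks roughly like:

```shell
#!/bin/sh
# Sketch: restore the file-system backup and reload the newest
# database dump. /dev/st0 and /var/backups are assumed examples.
TAPE=/dev/st0
if [ -c "$TAPE" ]; then
  # GNU tar stores absolute paths relative to /, so extract from /.
  cd / && tar xf "$TAPE"
  LATEST=$(ls -t /var/backups/portal-*.sql 2>/dev/null | head -1)
  if [ -n "$LATEST" ]; then
    /usr/local/mysql/bin/mysql < "$LATEST"
  fi
fi
```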

In a dual-host environment, cut over to the secondary and then do a completely clean installation with a data feed from the content management system. I prefer this type of recovery, as it makes sure the server is correctly installed and configured.

Can it really be this easy? Well, yes it can. Backup and recovery need not be a painful event; it simply needs to be done, regularly. Obviously, even in a dual-hosted environment it is recommended that the same backups be in place, just in case Mr. Murphy strikes both machines.

Day-to-day administration

The server and software are fairly self-contained and as such do not require daily administration. That said, it is important to monitor disk and other resource utilization from time to time. The server is really heavy on database storage, so it is important to plan new storage requirements ahead of time, and this can only be done if storage is monitored regularly. I would suggest using df -k or a similar command to extract the disk utilization and log it to an external database, to make sure we capture low-disk-space issues early. Log files can also get quite large, as we use AWStats to generate statistics for the site and therefore use a verbose log format.
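A minimal version of that monitoring, logging to a flat file rather than the external database suggested above (the log path is an assumption):

```shell
#!/bin/sh
# Sketch: record root file-system utilisation so low-space trends
# are caught early. -P forces single-line POSIX output.
LOG=/var/log/disk-usage.log
USED=$(df -kP / | awk 'NR==2 {print $5}')   # e.g. "42%"
if [ -w "$(dirname "$LOG")" ]; then
  echo "$(date '+%Y-%m-%d %H:%M') / $USED" >> "$LOG"
fi
```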

Running a regular clean-up process on the Zend cache is also recommended. It is normally cronned by the Zend installation; I would suggest you increase the frequency of this specific job to make the cache more responsive.

Do monitor one of the Linux security websites to keep abreast of the latest UNIX vulnerabilities, and apply the recommended patches as soon as possible; nothing is more dangerous than an unpatched Linux system. I prefer to get patches straight from the source.

Resources

The following resources might be of use in the day-to-day administration of the system.

WWW:

Introduction to BASH scripting

Advance BASH scripting

Red Hat online documentation

Books:

UNIX Backup & Recovery
