Proper Care and Feeding of your MDB - Broadcom Inc.




Recommendations for General MDB Maintenance

Overview

A well-thought-out MDB installation is only the beginning. Regular maintenance, optimization, and tuning tasks must be performed; such tasks are discussed later in this document.

This document also shows how to relocate Ingres files – checkpoint, data, dump, journal, and work - should you ever need to relocate them.

It also lists important commands (see Ingres Command Reference Guide for complete details) and discusses troubleshooting as well as backup and recovery.

Things you should do

Ingres must be installed on a disk with enough room available (a minimum of 100GB of free space). If possible, create the Ingres transaction log(s) on a separate disk, as this contributes to overall performance.

Checkpoint, dump, and journal files must be on separate disks. Otherwise, you will not be able to recover from disasters such as hardware failure.

Maintain the MDB often by running optimizedb, usermod, and ckpdb.

Although not as often as the MDB, System Tables also need maintenance, which can be accomplished by running sysmod (requires exclusive access to the MDB).

Checkpoint (i.e. ckpdb) the MDB at the end of a housekeeping cycle (i.e. optimizedb and usermod).

Use appropriate MDB settings – MEDIUM and LARGE – supplied out of the box.
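The daily housekeeping cycle recommended above (usermod, then optimizedb, then a ckpdb checkpoint) can be scripted. This is a minimal sketch using the command syntax shown later in this document; it only prints the commands as a dry run so they can be reviewed (or piped to sh) rather than executed blindly.

```shell
#!/bin/sh
# Daily MDB housekeeping sketch: rebuild tables, refresh statistics, checkpoint.
# Assumes database "mdb" and DBA user "mdbadmin" as used throughout this document.
DB=mdb
DBA=mdbadmin

housekeeping_commands() {
    echo "usermod -u${DBA} -online ${DB}"       # rebuild tables and secondary indexes
    echo "optimizedb -zk -zw -u${DBA} ${DB}"    # regenerate optimizer statistics
    echo "ckpdb -u${DBA} ${DB}"                 # checkpoint at the end of the cycle
}

# Dry run: print the commands; pipe to sh to execute them.
housekeeping_commands
```

Running the script prints the three commands in housekeeping order, so the cycle can be scheduled (for example from cron) once the output has been verified.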

Logging Considerations

Ingres allows both the primary transaction log and the optional dual transaction log to span multiple devices (up to 16). Writing to multiple disk devices improves logging system performance.

When configuring multiple log partitions, keep the following in mind:

All log partitions must be the same size

The primary and dual transaction logs must use the same number of partitions.

Minimum recommended transaction log size is 3GB.
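These constraints can be checked mechanically. The function below is a hypothetical helper (not an Ingres tool): given space-separated lists of partition sizes in GB for the primary and dual logs, it verifies that all partitions are the same size, that both logs use the same number of partitions (at most 16), and that the log meets the 3GB recommended minimum.

```shell
#!/bin/sh
# Hypothetical validation of a transaction log partition layout (not an Ingres
# utility). Arguments: two space-separated lists of partition sizes in GB, e.g.
#   check_log_partitions "2 2" "2 2"
# Prints OK, or the violated rule, and returns 0/1 accordingly.
check_log_partitions() {
    primary=$1
    dual=$2
    set -- $primary; p_count=$#; p_size=$1; p_total=0
    for s in $primary; do
        [ "$s" = "$p_size" ] || { echo "all log partitions must be the same size"; return 1; }
        p_total=$((p_total + s))
    done
    set -- $dual; d_count=$#
    for s in $dual; do
        [ "$s" = "$p_size" ] || { echo "all log partitions must be the same size"; return 1; }
    done
    [ "$p_count" -eq "$d_count" ] || { echo "primary and dual must use the same number of partitions"; return 1; }
    [ "$p_count" -le 16 ] || { echo "at most 16 partitions are supported"; return 1; }
    [ "$p_total" -ge 3 ] || { echo "minimum recommended log size is 3GB"; return 1; }
    echo OK
}
```

For example, `check_log_partitions "2 2" "2 2"` prints OK, while mismatched sizes or partition counts print the rule that was violated.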

Supplied MDB Settings – MEDIUM

MEDIUM values are the default MDB settings. They support Windows XP Pro (4GB of virtual memory required) and greatly improve Ingres performance by tuning key parameters. An extensive set of tests has been run against the MDB using the values below:

dbms.*.connect_limit 500

dbms.*.qef_hash_mem 49152000

dbms.*.qef_sort_mem 8192000

dbms.*.qef_sorthash_memory 675840000

dbms.private.*.p8k.dmf_cache_size 15000

dbms.private.*.p8k.dmf_group_count 4500

dbms.private.*.p16k.dmf_cache_size 3000

dbms.private.*.p16k.dmf_group_count 0

dbms.private.*.p32k.dmf_cache_size 2500

dbms.private.*.p32k.dmf_group_count 0

dbms.*.log_writer 10

dbms.*.max_tuple_length 0

dbms.*.default_page_size 8192

dbms.*.active_limit 256

dbms.*.system_readlock shared

dbms.*.qef_qep_mem 25600

dbms.*.rep_qman_threads 0

dbms.*.system_maxlocks 500

dbms.*.opf_memory 36700160

gcc.*.inbound_limit 500

gcc.*.outbound_limit 500

gcn.session_limit 64

ingstart.*.rmcmd 1

rcp.log.archiver_interval 1

rcp.lock.list_limit 2048

rcp.lock.per_tx_limit 3000

rcp.log.database_limit 10

rcp.log.tx_limit 100

rcp.dmf_cache_size8k 200
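If you need to apply or adjust one of these values by hand, Ingres stores them in config.dat, which can be updated with the iisetres utility (or interactively through CBF/VCBF). The sketch below only composes the commands as a dry run; "myhost" stands in for your configuration host name (typically the value printed by iipmhost), only a few illustrative parameters are shown, and a restart (ingstop/ingstart) is assumed to be required for the new values to take effect.

```shell
#!/bin/sh
# Dry run: compose iisetres commands for a few of the MEDIUM settings above.
# "myhost" is illustrative; substitute your installation's configuration host name.
HOST=myhost

medium_settings() {
    echo "iisetres ii.${HOST}.dbms.*.connect_limit 500"
    echo "iisetres ii.${HOST}.dbms.*.active_limit 256"
    echo "iisetres ii.${HOST}.dbms.*.default_page_size 8192"
}

# Review the output, then run the commands after stopping Ingres with ingstop.
medium_settings
```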

Supplied MDB Settings – LARGE

LARGE values are also supplied (setupmdb -II_MDB_SIZE=LARGE). These settings take full advantage of server-class hardware and improve performance even further. An extensive set of tests has been run against the MDB using the values below:

dbms.*.connect_limit 1000

dbms.*.qef_hash_mem 65536000

dbms.*.qef_sort_mem 8192000

dbms.*.qef_sorthash_memory 768000000

dbms.private.*.dmf_cache_size 15000

dbms.private.*.dmf_group_count 1500

dbms.private.*.dmf_memory 55296000

dbms.private.*.p4k.dmf_cache_size 15000

dbms.private.*.p4k.dmf_group_count 1000

dbms.private.*.p4k.dmf_memory 94208000

dbms.private.*.p8k.dmf_cache_size 25000

dbms.private.*.p8k.dmf_group_count 5500

dbms.private.*.p8k.dmf_memory 565248000

dbms.private.*.p16k.dmf_cache_size 5500

dbms.private.*.p16k.dmf_group_count 0

dbms.private.*.p16k.dmf_memory 90112000

dbms.private.*.p32k.dmf_cache_size 5500

dbms.private.*.p32k.dmf_group_count 0

dbms.private.*.p32k.dmf_memory 180224000

dbms.*.log_writer 10

dbms.*.max_tuple_length 0

dbms.*.default_page_size 8192

dbms.*.active_limit 1000

dbms.*.opf_memory 36700160

dbms.*.qef_qep_mem 25600

dbms.*.rep_qman_threads 0

dbms.*.system_maxlocks 500

dbms.*.system_readlock shared

gcc.*.inbound_limit 500

gcc.*.outbound_limit 500

gcn.session_limit 64

ingstart.*.rmcmd 1

rcp.log.archiver_interval 1

rcp.lock.list_limit 2048

rcp.lock.per_tx_limit 3000

rcp.log.database_limit 10

rcp.log.tx_limit 100

rcp.dmf_cache_size8k 200

MDB Settings – Key Changes

Ingres default settings are very conservative. As highlighted above, we fine-tuned several settings in order to get the best performance possible. Here is a brief description of the key changes:

1. dbms.*.connect_limit – specifies the number of connections which will be supported. We increased this value in order to support components like World View.

2. dbms.*.active_limit – specifies the number of active connections which will be supported by DBMS servers created with this definition. We increased this value in order to support components like World View.

3. dbms.*.qef_hash_mem – sets the maximum amount of memory that one session can use for performing hash operations in the Query Execution Facility (QEF). The total QEF hash memory limit server-wide is controlled by the related qef_sorthash_memory resource. We increased this value in order to accommodate as many hash operations as possible.

4. dbms.*.qef_sort_mem – controls the maximum amount of memory which can be allocated to any one session by the QEF for in-memory sorting. Any sort that does not fit within qef_sort_mem will be executed by the more general (but usually slower) disk-sorting facility.

5. dbms.*.qef_sorthash_memory – derived parameter that controls the maximum size of the QEF sorting and hashing memory pool. This resource has to be large enough to allow for all concurrently executing queries that do hashing operations or in-memory sorting.

6. dbms.*.opf_memory – derived parameter that specifies the maximum amount of memory which can be allocated by the Optimizer Facility (OPF) within DBMS servers created with this definition.

7. dbms.private.*.dmf_cache_size – number of individual buffers (single data pages) in the buffer manager for 2K buffer caches. Since most tables use 8K buffer, we increased this value for large settings only.

8. dbms.private.*.dmf_group_count – number of group buffers (readahead buffers) in the buffer manager. Group buffers are used for table scans and other situations where the buffer manager expects I/O to be largely sequential. Since most tables use 8K buffer, we increased this value for large settings only.

9. dbms.private.*.dmf_memory - size in bytes of the buffer cache. This resource is a computed resource, for display purposes.

10. dbms.private.*.p16k.dmf_cache_size – number of individual buffers (single data pages) in the buffer manager for 16K buffer caches.

11. dbms.private.*.p16k.dmf_group_count – number of group buffers (readahead buffers) in the buffer manager. Group buffers are used for table scans and other situations where the buffer manager expects I/O to be largely sequential. Since most tables use 8K buffers, we set this value to 0 so that 8K buffer caches can be given as much memory as possible. Should I/O be largely sequential for 16K buffer caches, individual buffers will be used.

12. dbms.private.*.p16k.dmf_memory – size in bytes of the buffer cache. This resource is a computed resource, for display purposes.

13. dbms.private.*.p32k.dmf_cache_size – number of individual buffers (single data pages) in the buffer manager for 32K buffer caches. Since most tables use 8K buffer, we increased this value for large settings only.

14. dbms.private.*.p32k.dmf_group_count – number of group buffers (readahead buffers) in the buffer manager. Group buffers are used for table scans and other situations where the buffer manager expects I/O to be largely sequential. Since most tables use 8K buffers, we set this value to 0 so that 8K buffer caches can be given as much memory as possible. Should I/O be largely sequential for 32K buffer caches, individual buffers will be used.

15. dbms.private.*.p32k.dmf_memory – size in bytes of the buffer cache. This resource is a computed resource, for display purposes.

16. dbms.private.*.p4k.dmf_cache_size – number of individual buffers (single data pages) in the buffer manager for 4K buffer caches. Since most tables use 8K buffer, we increased this value for large settings only.

17. dbms.private.*.p4k.dmf_group_count – number of group buffers (readahead buffers) in the buffer manager. Group buffers are used for table scans and other situations where the buffer manager expects I/O to be largely sequential. Since most tables use 8K buffer, we increased this value for large settings only.

18. dbms.private.*.p4k.dmf_memory – size in bytes of the buffer cache. This resource is a computed resource, for display purposes.

19. dbms.private.*.p8k.dmf_cache_size – number of individual buffers (single data pages) in the buffer manager for 8K buffer caches. Since most tables use 8K buffer, we increased this value for medium and large settings.

20. dbms.private.*.p8k.dmf_group_count – number of group buffers (readahead buffers) in the buffer manager. Group buffers are used for table scans and other situations where the buffer manager expects I/O to be largely sequential. Since most tables use 8K buffer, we increased this value for medium and large settings.

21. dbms.private.*.p8k.dmf_memory – size in bytes of the buffer cache. This resource is a computed resource, for display purposes.

22. gcc.*.inbound_limit – specifies the maximum number of inbound connections which can be supported by a Net server created with this definition.

23. gcc.*.outbound_limit – specifies the maximum number of outbound connections which can be supported by a Net server created with this definition.

Suggested Regular Maintenance

Maintenance and Performance Tuning

The MDB must be regularly maintained to produce good performance. Indexes must be rebuilt and statistics regenerated to reflect reality. This can be accomplished by running optimizedb and usermod commands.

Ingres uses a cost-based query optimizer to develop query execution plans (QEPs). Optimizing the DB affects the speed of query processing: the more complete and accurate the statistics, the more efficient the query execution strategies, and the faster the system performs.

Without statistics, the Query Optimizer can only guess what the data looks like, and its assumptions are usually not valid for application SQL queries.

The optimizedb command generates statistics that are used by the Ingres Query Optimizer to select an efficient query processing strategy.

Statistics are generated on the specified columns and stored in system catalogs (iistats and iihistograms). Complete and accurate statistics in the system catalogs result in more efficient query execution strategies and faster system performance.

Statistics need to be refreshed only when a significant change in the distribution of a column’s values has occurred.

If you suddenly experience a performance issue or run out of disk space – think DB Maintenance!

The focus here is on daily maintenance – some less frequent actions require exclusive access to the database; reorganizing System Catalogs from time to time via sysmod is an example. However, it is prudent to run sysmod after the first time optimizedb is run on a database.

optimizedb – daily

optimizedb can be run while the database is active. It should be run whenever a table is rebuilt or has major changes, because inaccurate statistics can cause the optimizer to choose poor query execution plans and, therefore, dramatically reduce performance. To avoid performance-related issues, we recommend running optimizedb daily!

Optimization requires disk space because temporary tables are created. Statistics are stored in system catalogs (iistats and iihistograms).

Syntax: optimizedb -zk -zw -umdbadmin mdb

ckpdb (Checkpoint) - daily

ckpdb can be run while the database is active. It creates a new checkpoint sequence, then copies log records of any DB changes that occurred during the checkpoint procedure to the dump file. rollforwarddb (discussed later in this document) uses the dump file when it recovers a database that was checkpointed online.

Permission required: System administrator, DBA, or an Ingres user with operator privilege.

Syntax: ckpdb -umdbadmin mdb

IMPORTANT: Free disk space must be monitored closely! Checkpoint and journal files will build up on the disk. However, always keep previous versions of checkpoint, dump, and journal files on tape! We will discuss that in greater detail later – Backup/Recovery Recommendations.

usermod – daily

The usermod command modifies the user-defined tables of a database to their currently defined storage structure and recreates any secondary indexes that are currently defined.

Similar to the sysmod command for system catalogs, usermod is a useful utility for maintaining user tables. It should be run on a regular (daily suggested) basis when:

Tables are in overflow

Space needs to be reclaimed

Before running optimizedb/sysmod

usermod can be run online; access to the table is permitted for most of the operation, except at the end, when usermod takes exclusive access to the table.

Syntax: usermod -umdbadmin -online mdb

sysmod

System Tables also require maintenance as they are tables themselves and subject to data changes.

The sysmod command modifies the System Tables (catalogs) of a database to predetermined storage structures. The tables are modified to the most appropriate storage structure for accelerating query processing.

If you create the MDB with no (or limited) data in it and then use it or bulk-load data, run sysmod. The same applies when the distribution of data changes.

The sysmod operation requires exclusive access to the database. However, we can tell sysmod to wait until the DB is free before executing by using the “+w” option.

Syntax: sysmod mdb

Relocate Database Files

Types of Files in an Ingres Database

There are 5 types of files in an Ingres Database:

Checkpoint – Static copy of the entire database

Data – User tables, indexes, and system catalogs

Dump – Records of changes to the database that occurred during the dump process. Used to recover databases checkpointed online

Journal – Dynamic records of changes made to the journaled tables after the last checkpoint. Take regular checkpoints to minimize recovery time and avoid wasting disk space with journal files

Work – Used for system work, such as sorting and creating temporary tables

Why Relocate?

If checkpoint, data, dump, journal, and work files are all currently located on the same disk, it is highly recommended that you relocate checkpoint, dump, and journal files to separate disks. That will:

Improve fault tolerance and aid recovery from hardware failure (e.g. failure of the disk holding the data files)

Potentially improve performance

Gain disk space

You can relocate checkpoint, dump, and journal files for an existing database. Locations must exist prior to this operation and must be specified with an appropriate Usage Type (that is, journal, checkpoint, or dump, depending on the type of file you want to relocate).

When you relocate checkpoint, journal, or dump files, the existing files are moved to the new location and any new files are created there.

IMPORTANT: Checkpoint, dump, and journal files can use only one location each.

Default Locations

Default Ingres locations are:

ii_checkpoint (checkpoint)

ii_database (data)

ii_dump (dump)

ii_journal (journal)

ii_work (work)

They are all set to the following:

Windows: C:\Program Files\CA\Ingres [EI]

UNIX/Linux: /opt/CA/IngresEI

Changing Default Locations – Windows

To change default locations under Windows edit the following entries in the sample.rsp file:

iidatabasedir="C:\INGRESNEWAREA"

iicheckpointdir="D:\INGRESAREA"

iijournaldir="D:\INGRESAREA"

iidumpdir="D:\INGRESAREA"

iiworkdir="C:\INGRESNEWAREA"

Changing Default Locations – UNIX/Linux

Under Linux/Unix, first create a response file and include the following entries in it:

II_DATABASE=

II_CHECKPOINT=

II_JOURNAL=

II_DUMP=

II_WORK=

The directories must exist prior to installing Ingres and must be owned by the user that owns the installation (ingres by default), with permission mask 700 (i.e. rwx------).

Example: export II_RESPONSE_FILE=/ingres/install.rsp

Guidelines for Locations

Locations must exist prior to relocation operations and must be specified with appropriate usage type: Journal, Checkpoint, Dump, Data, and/or Work.

Existing files are moved to new locations. New files are created there.

Location areas are created using host OS facilities. Each area must have a specific subdirectory structure that parallels that of the corresponding default location:

MD C:\INGRESNEWAREA

MD C:\INGRESNEWAREA\INGRES

MD C:\INGRESNEWAREA\INGRES\DATA (or CKP - checkpoint, JNL - journal, DMP - dump, WORK - work)

MD C:\INGRESNEWAREA\INGRES\DATA\DEFAULT
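On UNIX/Linux, the equivalent area structure can be created with mkdir -p. The sketch below is illustrative only: the /tmp path is an assumption for demonstration, and the lowercase subdirectory names mirror the MD commands above (data/ckp/jnl/dmp/work).

```shell
#!/bin/sh
# Sketch: create an Ingres location area with the required subdirectory
# structure on UNIX/Linux. The path and usage type below are illustrative.
make_location_area() {
    area=$1    # area root, e.g. /ingresnewarea
    type=$2    # data | ckp | jnl | dmp | work
    mkdir -p "$area/ingres/$type/default"
    chmod 700 "$area"   # area root must be private to the installation owner
}

# Example: a data area under /tmp (illustrative path)
make_location_area /tmp/ingresnewarea data
```

Remember that, as noted earlier, the area must be owned by the user that owns the installation before Ingres can use it.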

New areas are then used in the Create Location dialog (VDBA) or in create location, alter location, and drop location SQL statements.

Create Location Dialog – VDBA

1. Launch Ingres Visual DBA

2. Right-click Locations and select Create…

3. Enter Location Name, Area and select Usage Type. Example:

a. Location Name: NEWLOCATION_JOURNAL

b. Area: C:\INGRESNEWAREA

c. Usage Type: Journal

Create Location Statement

Alternatively, you can also create locations via create location SQL statement:

CREATE LOCATION LOCATION_NAME WITH AREA='AREA_NAME', USAGE=(USAGE_TYPE);

Where USAGE_TYPE = CHECKPOINT, DATA, WORK, DUMP, JOURNAL, ALL


Notice that after running that SQL statement in Visual SQL, a new Location is created under Locations.

IMPORTANT: You must be connected to iidbdb in order to create locations using the create location statement.

Relocating Database Files

The relocatedb command moves the journal, dump, checkpoint, or default work location for a database to another location (when a disk fills or is swapped out, for example).

The relocatedb command can also make a copy of an entire database. Any location in the original database can be moved to a new location in the new database.

Relocating Checkpoint Files

Once new locations are created, you can use the relocatedb command to relocate DB files. For example:

relocatedb mdb -new_ckp_location=newlocation_checkpoint

This command does the following:

Performs an update to the iidbdb

Updates checkpoint location in the configuration file.

Copies checkpoint files from old checkpoint location (ii_checkpoint) to new checkpoint location (newlocation_checkpoint)

Deletes checkpoint files from old checkpoint location

Relocating Journal Files

Example:

relocatedb mdb -new_jnl_location=newlocation_journal

This command does the following:

Performs an update to the iidbdb

Updates journal location in the configuration file

Copies journal files from old journal location (ii_journal) to new journal location (newlocation_journal)

Deletes journal files from old journal location

Relocating Dump Files

Example:

relocatedb mdb -new_dump_location=newlocation_dump

This command does the following:

Performs an update to the iidbdb

Updates dump location in the configuration file

Copies dump files from old dump location (ii_dump) to new dump location (newlocation_dump)

Deletes dump files from old dump location

Relocating Work Files

Before relocating work files, we must extend the database to the new work location using the extenddb command. This command extends databases to new or existing locations. For example:

extenddb -lnewlocation_work mdb -Uwork

Once the DB has been extended, we tell Ingres that the new work location is the default for the database using the relocatedb command. For example:

relocatedb mdb -new_work_location=newlocation_work


You will see in Visual DBA that the DB is now using two locations for work files, but the default one is no longer ii_work, but newlocation_work instead.

Next, we will use the unextenddb command to unextend a database location. In this case, we will unextend the location that used to be the default - ii_work.

The unextenddb command unextends a database location that was previously extended by the extenddb command, and deletes the entry for the location in the configuration file. The location can then be used again by the extenddb command. If tables or files exist in the location, an error message is issued and the database cannot be unextended at that location.

For example:

unextenddb -lii_work mdb -Uwork


You will now see in Visual DBA that the DB is no longer using two locations for work files. The default one is now newlocation_work.

IMPORTANT: After unextending a database location, you should checkpoint the database. Previous checkpoints cannot be used because they reference a location that is no longer accessible to the database!

Relocating Data Files - Data

Relocating data files is possible but not recommended!

Best approach – plan ahead so that relocating data files is never needed!

If you must relocate data files, first use the relocatedb command to copy the entire MDB. For example:

relocatedb mdb -new_database=newmdb -location=ii_database -new_location=newlocation_data

This command not only copies the entire MDB, but also relocates its data files. The default data location for newmdb will be newlocation_data. Checkpoint, dump, journal, and work locations have already been relocated, so the default locations for newmdb are: newlocation_checkpoint, newlocation_dump, newlocation_journal, newlocation_work, and newlocation_data.

Now that all DB files are relocated, destroy the original MDB. For example:

destroydb -umdbadmin mdb

Finally, copy newmdb back to mdb, and then destroy newmdb:

relocatedb newmdb -new_database=mdb

destroydb -umdbadmin newmdb
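The relocation sequence above chains several destructive commands, so it is worth reviewing them as a unit before running anything. The sketch below simply prints the sequence from this section as a dry run; pipe it to sh only after verifying checkpoints, locations, and database names.

```shell
#!/bin/sh
# Dry-run sketch of the data-file relocation sequence described above.
# destroydb is destructive: review the output before piping it to sh.
relocate_data_commands() {
    echo "relocatedb mdb -new_database=newmdb -location=ii_database -new_location=newlocation_data"
    echo "destroydb -umdbadmin mdb"
    echo "relocatedb newmdb -new_database=mdb"
    echo "destroydb -umdbadmin newmdb"
    echo "ckpdb -umdbadmin mdb"   # checkpoint immediately: the relocated MDB has none
}

relocate_data_commands
```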

Visual DBA will now show the MDB with all of its files relocated.

IMPORTANT: Checkpoint the relocated MDB immediately as there are no checkpoints at this point!

Troubleshooting Recommendations

Troubleshooting

Problems can lie with the “host” product (e.g. Unicenter NSM), with the Ingres installation, or with the MDB itself. First determine whether a problem lies within the MDB or within the product that installed the MDB.

First Steps

Is Ingres up and running?

On UNIX/Linux, check processes with ps -fu ingres

On Windows, verify Visual Manager icon is fully green. If not, double click on it to see which processes are red.

If extra or missing processes, stop and restart Ingres

Is MDB accessible?

Run sql mdb -umdbadmin

Database Information

Use infodb command to gather high level information on the DB, including:

Status of db

Location of files

History of checkpoints and journals

Must be DBA or system admin to run this command

Syntax: infodb mdb

Check MDB Error Log Files

errlog.log – default log for most programs

iiacp.log – archive error log

iircp.log – recovery error log

errvdba.log – VDBA error log

ingstart.log – Ingres startup log

install_mdb.log – information about MDB creation processing

Backup/Recovery Recommendations

When to backup

Good: Run at end of a housekeeping cycle

Better: Run prior to and after housekeeping (provides a recovery point in the event housekeeping fails)

Best: Run before and after housekeeping and at suitable points in between. This allows minimum time for recovery based on size of checkpoint and journal files!

Caution – checkpoint requires space similar to the full MDB

Checkpoint

Takes place online (while the DB is in use) and is transparent to users. The ckpdb command creates a new checkpoint, then copies log records of any DB changes that occurred during the checkpoint procedure to the dump file.

Up to 99 checkpoints are kept in the f file for the DB. When this number is reached, the oldest is dropped from the list.

Permission required: System administrator, DBA, or an Ingres user with operator privilege.

Syntax: ckpdb -umdbadmin mdb

IMPORTANT: In order to prevent checkpoint and journal files from using too much disk space, we recommend that checkpoint, journal, and dump files from previous checkpoint sequences are moved to tape!

How do I find out what checkpoint, journal, and dump files need to remain on disk in order to recover from the current checkpoint sequence?

Run infodb mdb and examine the following data:

----Journal information---------------------------------------------------------

Checkpoint sequence : 8 Journal sequence : 1671

----Dump information------------------------------------------------------------

Checkpoint sequence : 8 Dump sequence : 7

----Checkpoint History for Journal----------------------------------------------

Date Ckp_sequence First_jnl Last_jnl valid mode

----------------------------------------------------------------------------

Wed Oct 19 14:07:15 2005 7 1342 1342 1 ONLINE

Wed Oct 19 14:31:28 2005 8 1343 1671 1 ONLINE

----Checkpoint History for Dump-------------------------------------------------

Date Ckp_sequence First_dmp Last_dmp valid mode

----------------------------------------------------------------------------

Wed Oct 19 14:07:15 2005 7 6 6 1 ONLINE

Wed Oct 19 14:31:28 2005 8 7 7 1 ONLINE

----Extent directory------------------------------------------------------------

Location Flags Physical_path

------------------------------------------------------------------

ii_journal JOURNAL C:\Program Files\CA\Ingres [EI]\ingres\jnl\default\mdb

ii_checkpoint CHECKPOINT C:\Program Files\CA\Ingres [EI]\ingres\ckp\default\mdb

ii_dump DUMP C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb

In the example above the CURRENT checkpoint sequence is 8.

It is safe to move the checkpoint files of all other checkpoint sequences to tape, for example:

C:\Program Files\CA\Ingres [EI]\ingres\ckp\default\mdb\c0007001.ckp

Should you need to recover from a previous checkpoint sequence, that sequence’s checkpoint files must be restored.

The FIRST journal file for checkpoint sequence 8 is C:\Program Files\CA\Ingres [EI]\ingres\jnl\default\mdb\j0001343.jnl.

The LAST journal file for checkpoint sequence 8 is C:\Program Files\CA\Ingres [EI]\ingres\jnl\default\mdb\j0001671.jnl.

It is safe to move all other files to tape. Should you need to recover from a previous checkpoint sequence, FIRST and LAST journal files for that sequence need to be restored.
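The journal file names follow a simple zero-padded numbering pattern (j0001343.jnl for journal sequence 1343 in this example), so the files to keep for a checkpoint sequence can be derived from infodb's First_jnl and Last_jnl values. The helper below is a hypothetical convenience based only on the names shown above, not an Ingres utility.

```shell
#!/bin/sh
# Hypothetical helper: map an infodb journal sequence number to its file name.
# Pattern inferred from the example names j0001343.jnl / j0001671.jnl.
jnl_file() {
    printf 'j%07d.jnl\n' "$1"
}

# First and last journal files for checkpoint sequence 8 in the example:
jnl_file 1343
jnl_file 1671
```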

The FIRST dump files for checkpoint sequence 8 are:

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\c0000007.dmp

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\c0007000.lst

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\d0000007.dmp.

In the example above, the LAST dump files for checkpoint sequence 8 are also

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\c0000007.dmp

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\c0007000.lst

C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\d0000007.dmp

It is safe to move all other files to tape, except C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb\f.

Should you need to recover from a previous checkpoint sequence, FIRST and LAST dump files for that sequence need to be restored.

Why C:\Program Files\CA\Ingres [EI]\ingres\ckp\default\mdb?

Well, as you can see above, that’s the current location of MDB checkpoint files – ii_checkpoint.

Why C:\Program Files\CA\Ingres [EI]\ingres\jnl\default\mdb?

Well, as you can see above, that’s the current location of MDB journal files – ii_journal.

Why C:\Program Files\CA\Ingres [EI]\ingres\dmp\default\mdb?

Well, as you can see above, that’s the current location of MDB dump files – ii_dump.

Unloading a Database – unloaddb

The unloaddb command creates command files that the DBA uses to unload the data from a database and reload the data into a new, empty database.

Use unloaddb when a database must be totally rebuilt, or for checkpointing the database. The unloaddb command unloads all objects in the database, including tables, views, integrity constraints, permissions, forms, graphs, and report definitions.

Two command files are created:

Unload file—contains commands to read sequentially through the database, copying every user table into its own file in the named directory.

Reload file—contains commands to load a new, empty database with the information contained in the files created by the unload file.

On Windows the file names are unload.bat and reload.bat. On UNIX, the file names are unload.ing and reload.ing.

Unloading/reloading the DB in binary mode can be accomplished by running unloaddb mdb. Character mode, which is useful for transporting databases between computer systems whose internal representations of non-ASCII data differ, is selected with unloaddb -c mdb.

The DBA must execute those files to accomplish the unloading and reloading of the database. It is important that the database be recreated with the reload file before doing any work (for example, creating tables, forms, and reports) in the new database.

IMPORTANT: To optimize performance, run the sysmod and optimizedb commands after recreating the database.

Copying a Database

The copydb command creates command files containing the SQL statements required to copy and restore a database. The command creates the following two command files in the current directory:

1. copy.out contains SQL commands to copy all tables, views, and procedures owned by the user into files in the specified directory.

2. copy.in contains SQL commands to copy the files into tables, recreate views, procedures, and indexes, and perform modifications.

copydb mdb -umdbadmin

copydb mdb -umdbadmin -c creates printable data files. This is useful for transporting databases between computer systems whose internal representations of non-ASCII data differ. (When you restore a database from a file created using the -c flag, the copy command automatically converts data stored in this format back to the appropriate type.)

Backup Recommendations

Keep the last checkpoint and its associated journal and dump files on disk. Take a copy of the checkpoint, dump, and journal directories for the DB to tape (or disk). Allow the number of checkpoints in the f file to grow to the maximum and then let the DB drop the oldest.
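One way to follow these recommendations is to archive the checkpoint, journal, and dump directories to a dated tarball before moving it to tape. This is a minimal sketch; the function and paths are illustrative, not part of Ingres, and your site's backup tooling may do this differently.

```shell
#!/bin/sh
# Sketch: bundle checkpoint/journal/dump directories into a dated archive.
# Illustrative helper, not an Ingres utility; adapt the directory arguments to
# your ii_checkpoint, ii_journal, and ii_dump locations.
backup_mdb_dirs() {
    dest=$1; shift             # where to write the archive
    stamp=$(date +%Y%m%d)
    archive="$dest/mdb-backup-$stamp.tar.gz"
    tar -czf "$archive" "$@"   # remaining arguments: directories to archive
    echo "$archive"
}
```

For example, `backup_mdb_dirs /backups /opt/CA/IngresEI/ingres/ckp /opt/CA/IngresEI/ingres/jnl /opt/CA/IngresEI/ingres/dmp` prints the path of the archive it created, ready to be copied to tape.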

Recovery

The rollforwarddb command recovers a database or table from the last checkpoint and the current journal and dump files.

If the target checkpoint was performed online (while the database was in use), then rollforwarddb does the following:

1. Restores the database from the checkpoint location to the database location

2. Applies the log records in the dump location to the database, which returns the database to its state when the checkpoint began

3. Applies the journal records to the database

If the target checkpoint was executed offline, then the second step is omitted.

Permission required: DBA or a system administrator.

Syntax: rollforwarddb -umdbadmin mdb

Reference List/Cheat Sheet

Command Cheat Sheet

usermod -online -umdbadmin mdb

ckpdb -umdbadmin mdb

optimizedb -zk -zw -umdbadmin mdb

help mdb

sql mdb -umdbadmin

rollforwarddb -umdbadmin mdb

sysmod mdb

unloaddb mdb

infodb mdb

For more info, consult Command Reference Guide from MDB installation.
