


Oracle Database Tips, Tricks, and Techniques

Steven George, Oracle Corporation

Introduction

Interested in learning new ways to make your job easier? Discover tips and techniques that will allow you to be more efficient in the datacenter while improving the performance and manageability of your Oracle9i or Oracle8i database. We'll explore how some simple tricks can facilitate a variety of routine processes such as database maintenance, backup and recovery.

New database packages

Easy way to display explain plans

The DISPLAY table function in the new DBMS_XPLAN package is an easy way to format the execution plan of a SQL statement. Previously you had to write your own code to display the explain plan in a readable format; now a single packaged function does it for you. In the example below you still use the EXPLAIN PLAN FOR statement, but the package formats the output from the plan table.

SQL> explain plan for

2 select count(deptno) from big_dept

3 where deptno NOT IN (select deptno from big_emp);

Explained.

SQL> SELECT *

2 FROM TABLE(dbms_xplan.display);

PLAN_TABLE_OUTPUT

----------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |     1 |    15 |     9 |
|   1 |  SORT AGGREGATE      |          |     1 |    15 |       |
|*  2 |   HASH JOIN ANTI     |          |     1 |    15 |     9 |
|   3 |    TABLE ACCESS FULL | BIG_DEPT |    82 |  1066 |     2 |
|   4 |    TABLE ACCESS FULL | BIG_EMP  |  5600 | 11200 |     6 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("BIG_DEPT"."DEPTNO"="BIG_EMP"."DEPTNO")

Note: cpu costing is off

17 rows selected.

Easy way to get complete table definitions from the database

The DBMS_METADATA package is a powerful tool for obtaining the complete definition of a schema object. It enables you to obtain all of the attributes of an object in one pass. The object is described as DDL that can be used to (re)create it.

In this example the GET_DDL function is used to fetch the DDL for all tables in the current schema, filtering out nested tables and overflow segments. The SET_TRANSFORM_PARAM (with the handle value equal to DBMS_METADATA.SESSION_TRANSFORM meaning "for the current session") is used to specify that storage clauses are not to be returned in the SQL DDL. Afterwards, the session-level transform parameters are reset to their defaults. Once set, transform parameter values remain in effect until specifically reset to their defaults.

EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(

DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);

SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)

FROM USER_ALL_TABLES u

WHERE u.nested='NO'

AND (u.iot_type is null or u.iot_type='IOT');

EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(

DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');
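
If you only need the DDL for a single named object, the same function can be called directly from a query; for example (EMP here is just an illustrative table in the current schema):

SELECT DBMS_METADATA.GET_DDL('TABLE','EMP') FROM DUAL;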

Partitioning by Range-List method and New subpartition Templates

A new partitioning method, Range-List, has been added to the set of partitioning methods currently supported. Range-List naturally extends the List partitioning method introduced in Oracle9i, and the functionality already available with Range-Hash partitioning is now extended to Range-List partitioning.

The Range-List method partitions the data using the Range method and, within each partition, subpartitions it using the List method. The Range-List method allows explicit control over how rows map to subpartitions within a given range of values. Unlike composite Range-Hash partitioning, multi-column subpartitioning keys are not supported for Range-List partitioning. The Range-List partitioning method is well suited to historical data at the partition level and also allows for distribution of data based on discrete column values at the subpartition level. Unordered and unrelated sets of data within a range of values can be grouped and organized together naturally using Range-List partitioning. The details of Range-List partitioning are best described with an example. In this case we want to track sales data of products by quarters and, within each quarter, group it by specified geographical regions; that is, we want to group quarterly sales figures by state.

CREATE TABLE quarterly_regional_sales

( deptno number, item_no varchar2(20),

txn_date date, txn_amount number, state varchar2(2)

)

PARTITION BY RANGE (txn_date)

SUBPARTITION BY LIST (state)

(

PARTITION q1_1999 VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY'))

( SUBPARTITION q1_1999_northwest VALUES ('OR', 'WA'),

SUBPARTITION q1_1999_southwest VALUES ('AZ', 'UT'),

SUBPARTITION q1_1999_northeast VALUES ('NY', 'VM', 'NJ'),

SUBPARTITION q1_1999_southeast VALUES ('FL', 'GA'),

SUBPARTITION q1_1999_northcentral VALUES ('SD', 'WI'),

SUBPARTITION q1_1999_southcentral VALUES ('NM', 'TX'),

SUBPARTITION q1_1999_Others VALUES (DEFAULT)

),

PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('1-JUL-1999','DD-MON-YYYY'))

( SUBPARTITION q2_1999_northwest VALUES ('OR', 'WA'),

SUBPARTITION q2_1999_southwest VALUES ('AZ', 'UT', 'NM'),

SUBPARTITION q2_1999_northeast VALUES ('NY', 'VM', 'NJ'),

SUBPARTITION q2_1999_others VALUES (DEFAULT)

))

;

New in Oracle9i Release 2 and available only with list partitioning, the keyword DEFAULT can be used to describe the value list for a partition. It identifies a partition that accommodates rows that do not map into any of the other partitions. The same keyword can also be used with list subpartitions.
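
As a minimal sketch (the table, column, and partition names are illustrative, not from this paper), a DEFAULT partition in a plain list-partitioned table looks like this:

CREATE TABLE sales_by_state
( sales_id number, state varchar2(2), amount number )
PARTITION BY LIST (state)
( PARTITION west VALUES ('OR', 'WA', 'CA'),
PARTITION east VALUES ('NY', 'NJ'),
PARTITION other VALUES (DEFAULT)
);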

Using Subpartition Templates to Describe Composite Partitioned Tables

You can create subpartitions in a composite partitioned table using a subpartition template. A subpartition template simplifies the specification of subpartitions by not requiring that a subpartition descriptor be specified for every partition in the table. Instead, you describe subpartitions only once in a template, then apply that subpartition template to every partition in the table.

The subpartition template is used whenever a subpartition descriptor is not specified for a partition. If a subpartition descriptor is specified, then it is used instead of the subpartition template for that partition. If no subpartition template is specified, and no subpartition descriptor is supplied for a partition, then a single default subpartition is created.

Specifying a Subpartition Template for a Range-Hash Partitioned Table

In the case of range-hash partitioned tables, the subpartition template can describe the subpartitions in detail, or it can specify just the number of hash subpartitions.

The following example creates a range-hash partitioned table using a subpartition template:

CREATE TABLE emp_sub_template (deptno NUMBER, empname VARCHAR(32), grade NUMBER)

PARTITION BY RANGE(deptno) SUBPARTITION BY HASH(empname)

SUBPARTITION TEMPLATE

(SUBPARTITION a TABLESPACE ts1,

SUBPARTITION b TABLESPACE ts2,

SUBPARTITION c TABLESPACE ts3,

SUBPARTITION d TABLESPACE ts4

)

(PARTITION p1 VALUES LESS THAN (1000),

PARTITION p2 VALUES LESS THAN (2000),

PARTITION p3 VALUES LESS THAN (MAXVALUE)

);

This example produces the following table description:

Every partition has four subpartitions as described in the subpartition template. Each subpartition has a tablespace specified. It is required that if a tablespace is specified for one subpartition in a subpartition template, then one must be specified for all. The names of the subpartitions are generated by concatenating the partition name with the subpartition name in the form:

partition name_subpartition name

The following query displays the subpartition names and tablespaces:

SQL> SELECT TABLESPACE_NAME, PARTITION_NAME, SUBPARTITION_NAME

2 FROM DBA_TAB_SUBPARTITIONS WHERE TABLE_NAME='EMP_SUB_TEMPLATE'

3 ORDER BY TABLESPACE_NAME;

TABLESPACE_NAME PARTITION_NAME SUBPARTITION_NAME

--------------- --------------- ------------------

TS1 P1 P1_A

TS1 P2 P2_A

TS1 P3 P3_A

TS2 P1 P1_B

TS2 P2 P2_B

TS2 P3 P3_B

TS3 P1 P1_C

TS3 P2 P2_C

TS3 P3 P3_C

TS4 P1 P1_D

TS4 P2 P2_D

TS4 P3 P3_D

12 rows selected.

Specifying a Subpartition Template for a Range-List Partitioned Table

The following example, for a range-list partitioned table, illustrates how using a subpartition template can help you stripe data across tablespaces. In this example a table is created where the table subpartitions are vertically striped, meaning that subpartition n from every partition is in the same tablespace.

CREATE TABLE stripe_regional_sales

( deptno number, item_no varchar2(20),

txn_date date, txn_amount number, state varchar2(2))

PARTITION BY RANGE (txn_date)

SUBPARTITION BY LIST (state)

SUBPARTITION TEMPLATE

(SUBPARTITION northwest VALUES ('OR', 'WA') TABLESPACE tbs_1,

SUBPARTITION southwest VALUES ('AZ', 'UT', 'NM') TABLESPACE tbs_2,

SUBPARTITION northeast VALUES ('NY', 'VM', 'NJ') TABLESPACE tbs_3,

SUBPARTITION southeast VALUES ('FL', 'GA') TABLESPACE tbs_4,

SUBPARTITION midwest VALUES ('SD', 'WI') TABLESPACE tbs_5,

SUBPARTITION south VALUES ('AL', 'AK') TABLESPACE tbs_6,

SUBPARTITION others VALUES (DEFAULT ) TABLESPACE tbs_7

)

(PARTITION q1_1999 VALUES LESS THAN ( TO_DATE('01-APR-1999','DD-MON-YYYY')),

PARTITION q2_1999 VALUES LESS THAN ( TO_DATE('01-JUL-1999','DD-MON-YYYY')),

PARTITION q3_1999 VALUES LESS THAN ( TO_DATE('01-OCT-1999','DD-MON-YYYY')),

PARTITION q4_1999 VALUES LESS THAN ( TO_DATE('1-JAN-2000','DD-MON-YYYY'))

);

If you had specified the tablespaces at the partition level (for example, tbs_1 for partition q1_1999, tbs_2 for partition q2_1999, tbs_3 for partition q3_1999, and tbs_4 for partition q4_1999) and not in the subpartition template, then the table would be horizontally striped: all subpartitions would be in the tablespace of their owning partition.
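
The following sketch illustrates the horizontally striped variant. It uses a shortened value-list template to keep the example brief; it is not an exact recreation of the table above.

CREATE TABLE stripe_regional_sales_h
( deptno number, item_no varchar2(20),
txn_date date, txn_amount number, state varchar2(2))
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (state)
SUBPARTITION TEMPLATE
(SUBPARTITION northwest VALUES ('OR', 'WA'),
SUBPARTITION southeast VALUES ('FL', 'GA'),
SUBPARTITION others VALUES (DEFAULT))
(PARTITION q1_1999 VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY')) TABLESPACE tbs_1,
PARTITION q2_1999 VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY')) TABLESPACE tbs_2,
PARTITION q3_1999 VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY')) TABLESPACE tbs_3,
PARTITION q4_1999 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')) TABLESPACE tbs_4);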

Accessing Data Values From Redo Records

LogMiner 2.0 features two new packaged functions for accessing (and analyzing) the data values contained in the redo logs. Both functions share the following constraint: they can only be invoked in the context of a LogMiner session, and only when the FROM clause of the SQL SELECT is V$LOGMNR_CONTENTS.

In Oracle9i Release 2, two new columns, redo_value and undo_value, have been added to the v$logmnr_contents view. The columns are opaque handles to the redo and undo portions of the change data respectively, and can be used to extract individual column values with the help of the mine_value function.

dbms_logmnr.mine_value function

The function takes two arguments: a RAW(32) column holding the opaque handle to the change data, and a VARCHAR2 argument giving the fully qualified name of the column to be mined from that handle. The function always returns a VARCHAR2 (so if the column being mined is a NUMBER, the number is converted to a string before being returned). Only basic data types can be mined; LOB, LONG, and ADT columns cannot. For chained rows, the blocks are collapsed and a single value is returned.

select dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') from v$logmnr_contents will return the SAL column values culled from the redo component of each row change made to table SCOTT.EMP.

This allows us to do complex queries like:

select sql_redo from v$logmnr_contents where dbms_logmnr.mine_value(redo_value,

'SCOTT.EMP.SAL') > 2*dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') and operation = 'UPDATE';

will return all update statements executed against SCOTT.EMP that changed the SAL column to more than twice its original value.

Other LogMiner improvements include support for Large Objects (LOBs), LONGs, and parallel DML, as well as a multiversioned dictionary and real time mining of online logs.

LogMiner lets you make queries based on actual data values. For instance, you could perform a query to show all updates to scott.emp that increased sal more than a certain amount. Data such as this can be used to analyze system behavior and to perform auditing tasks.

LogMiner data extraction from redo logs is performed using two mine functions: DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. These functions are part of the DBMS_LOGMNR package. Support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.

The following is an example of how you could use the MINE_VALUE function to select all updates to scott.emp that increased the sal column to more than twice its original value:

SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS

2 WHERE

3 SEG_NAME = 'emp' AND

4 SEG_OWNER = 'SCOTT' AND

5 OPERATION = 'UPDATE' AND

6 DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') >

7 2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL');

As shown in this example, the MINE_VALUE function takes two arguments. The first one specifies whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion of the data. The second argument is a string that specifies the fully-qualified name of the column to be mined (in this case, SCOTT.EMP.SAL). The MINE_VALUE function always returns a string that can be converted back to the original datatype.
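
The companion COLUMN_PRESENT function returns 1 when the named column actually appears in the redo or undo portion of a row change, which lets you distinguish a genuinely NULL mined value from a column that simply was not touched by the statement. A sketch of its use, following the same pattern as the query above:

SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'emp' AND
OPERATION = 'UPDATE' AND
DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.SAL') = 1;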

Speed up of application and database upgrades

To minimize the impact on production use of the database system, Oracle9i Release 2 introduces new features to speed up application and database upgrades. These include the ability to rename constraints and columns, and a significant reduction in the recompilation of PL/SQL packages and procedures: wrapped source loads faster, loading an unchanged package, view, or synonym does nothing, double invalidations are removed, and compilation proceeds in parallel in dependency order.

Use the RENAME COLUMN clause to rename a column of a table. The new column name must not be the same as any other column name in the table.

Renaming a Column

When you rename a column, Oracle handles dependent objects as follows:

• Function-based indexes and check constraints that depend on the renamed column remain valid.

• Dependent views, triggers, domain indexes, functions, procedures, and packages are marked INVALID. Oracle attempts to revalidate them when they are next accessed, but you may need to alter these objects with the new column name if revalidation fails.

Restrictions on the rename column clause:

• You cannot combine this clause with any of the other column_clauses in the same statement.

• You cannot rename a column that is used to define a bitmap join index. Instead you must drop the index, rename the column, and re-create the index.

ALTER TABLE tablename RENAME COLUMN old_name TO new_name;

Renaming a Constraint

The RENAME CONSTRAINT clause lets you rename any existing constraint on a table. The new constraint name cannot be the same as that of any existing constraint on any object in the same schema. All objects that depend on the constraint remain valid.

ALTER TABLE tablename RENAME CONSTRAINT Old_constraint_name

TO New_Constraint_name;

RMAN Improvements

The server parameter file (SPFILE) can now be backed up and restored automatically, further simplifying backup and restore of the system. Another new capability in this area allows Recovery Manager (RMAN) to automate the space management of archived log files, freeing the DBA from managing the space allocation of the archived logs.

To help you preserve your current server-side parameter file (SPFILE), RMAN can back up your SPFILE automatically. RMAN does this if the instance was started with the SPFILE and the RMAN operation includes an automatic backup of the control file. This occurs when you issue any RMAN backup command that includes data file 1 (either explicitly or implicitly, as with a BACKUP DATABASE command), or when your RMAN configuration setting for CONTROLFILE AUTOBACKUP is ON and any of the following occur:

• You issue a BACKUP or COPY command at the RMAN prompt (first example)

• You execute a RUN block that contains a BACKUP or COPY command that is immediately followed by a command other than BACKUP or COPY

• You execute a RUN block in which the last command is either BACKUP or COPY

For backups made with the autobackup feature enabled, RMAN generates a separate backup piece to hold the SPFILE and control file backup. You can also manually back up the SPFILE used to start your instance with the BACKUP command. If CONTROLFILE AUTOBACKUP is ON, RMAN automatically backs up the control file and server parameter file after structural changes to the database, and the target database records the autobackup in the alert log. Unlike the RMAN BACKUP DATABASE command, the RESTORE DATABASE command does not restore the SPFILE (or the control file) automatically.
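
For example, assuming an RMAN session connected to the target database, the autobackup behavior can be switched on and a manual SPFILE backup taken with commands such as:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP SPFILE;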

The example below shows the RMAN RESTORE commands required to retrieve your SPFILE from an autobackup. If you have completely lost the SPFILE, then the RMAN STARTUP command generates a 'dummy' parameter file. This enables the instance to reach the NOMOUNT state, so that the SPFILE can be restored. After restoring the SPFILE you must bounce the instance so that the restored SPFILE parameter values take effect. The steps to follow are:

SET DBID 676549873;

STARTUP NOMOUNT FORCE;

RESTORE SPFILE FROM AUTOBACKUP;

STARTUP NOMOUNT FORCE;

RESTORE CONTROLFILE FROM AUTOBACKUP;

Additional options to manage server parameter file backups are provided through a new SPFILE keyword in RMAN commands. These include the commands:

LIST BACKUP OF SPFILE

CROSSCHECK BACKUP OF SPFILE

CHANGE BACKUP OF SPFILE

DELETE BACKUP OF SPFILE

Simplified Database backup with RMAN

The easiest way to make backups is to configure automatic channels. For example, so long as you have already configured an sbt device type, you can configure a default sbt channel as follows (note that the PARMS value is vendor-specific):

CONFIGURE DEVICE TYPE sbt PARALLELISM 1;

CONFIGURE DEFAULT DEVICE TYPE TO sbt;

CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'ENV=(NSR_SERVER=bksvr1)';

Then, you can back up the database to tape at the RMAN prompt as follows:

BACKUP DATABASE;

RMAN preconfigures a DISK channel for you, so you can make disk backups using automatic channels without performing any configuration whatsoever.

The other method is to allocate channels manually within a run job. For example, this command allocates multiple disk channels and then backs up the database and archived redo logs:

RUN

{

ALLOCATE CHANNEL ch1 DEVICE TYPE DISK;

ALLOCATE CHANNEL ch2 DEVICE TYPE DISK;

ALLOCATE CHANNEL ch3 DEVICE TYPE DISK;

BACKUP DATABASE PLUS ARCHIVELOG;

}

RMAN: Control Archived Log Space Usage

During an RMAN recovery operation, archived log files may need to be retrieved from a backup set and stored on disk. You can direct RMAN to delete the archived logs after reading them; however, space problems can still occur because restoring the archived logs to disk is faster than applying them to the database. In previous releases, if the target disk location ran out of space, the whole recovery operation failed.

To avoid this problem in Oracle9i Database Release 2, specify the maximum disk space available for the restored archived logs when you use the RECOVER…DELETE command in RMAN. This causes RMAN to delete already-applied files when restoring the next part of the backup set would exceed the specified storage. Include the MAXSIZE parameter in the RECOVER…DELETE command to indicate how much space you have available in bytes, with an optional K, M, or G suffix. The example below shows a recovery operation for which 100KB of disk space is made available for the archived logs.

If MAXSIZE is too small to hold a full archive log, RMAN will terminate the recovery operation with an error. If MAXSIZE is smaller than a backupset containing the required archived logs, the backup set will be read more than once, increasing the overall recovery time. In this case, RMAN will issue a warning advising you to increase the value of MAXSIZE.

RMAN> RECOVER DATABASE

2> DELETE

3> ARCHIVELOG MAXSIZE 100K;

Enterprise Manager Backup Wizard

[Figure: Enterprise Manager Backup Wizard, Backup Selection page]

With Oracle9i Database Release 2, a new option, "delete unnecessary backups", has been added to the Backup Selection page in the Backup Wizard. This option is enabled only when the target database is Oracle9i or later and a retention policy has been set in the target database. Backups are deemed unnecessary based on your specifications in the retention policy. Deleting obsolete backups was already possible in previous releases using the RMAN command:

RMAN> DELETE OBSOLETE REDUNDANCY = 3;

Backup Wizard: Deletion of Archived Logs

[Figure: Backup Wizard, archived log deletion options]

In Oracle9i Database Release 2, a new RMAN command is available through Enterprise Manager. You can delete old archived logs while keeping recent ones on disk. The command deletes archived logs only after they have been backed up at least a specified number of times; you choose the minimum number of backups required before the logs are deleted. The delete command is executed along with each backup job, but only after the backup operation has completed. Using Enterprise Manager you can:

Delete the archive files yourself (first button).

Delete the archive logs if they have been backed up more than a given number of times.

This was introduced in an earlier version, and is executed using the RMAN command:

RMAN> DELETE OBSOLETE REDUNDANCY = 3;

Delete all archive logs created before the current backup time. This was introduced in an earlier version, and is executed using the RMAN command:

RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;

Backup Wizard: Override RMAN Configuration

[Figure: Backup Wizard, Override RMAN Configuration page]

The Backup Wizard Override RMAN Configuration page appears only if the target database is Oracle9i or later.

This page allows you to make a special backup without changing the stored configuration information. For example, you may override the retention policy and keep one backup for a relatively longer period. You may also choose to make a complete whole database backup by turning on the second and third options. Note that the "include all…" option is enabled only if at least one tablespace is configured as excluded from a database backup. The third option is enabled only if the database has been configured to "optimize backup".

The equivalent RMAN commands are:

First button "Do not optimize the backup"

RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;

Second button "Include all the tablespaces in a whole database backup"

RMAN> BACKUP DATABASE NOEXCLUDE;

Third button "Keep this backup or copy until the date specified"

RMAN> BACKUP DATABASE KEEP UNTIL TIME

2> "TO_DATE('11-NOV-2001', 'dd-mon-yyyy')";

DBNEWID Utility

DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME) for an operational database. This utility is also used in creating logical standby databases.

Prior to the introduction of the DBNEWID utility, you could manually create a copy of a database and give it a new database name (DBNAME) by re-creating the control file. However, you could not give the database a new identifier (DBID). The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, you could not register a seed database and a manually copied database together in the same RMAN repository. The DBNEWID utility solves this problem by allowing you to change any of the following:

• The DBID of a database

• The DBNAME of a database

• Both the DBNAME and DBID of a database

d:\>nid

DBNEWID: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

Keyword Description (Default)

----------------------------------------------------

TARGET Username/Password (NONE)

DBNAME New database name (NONE)

LOGFILE Output Log (NONE)

REVERT Revert failed change NO

SETNAME Set a new database name only NO

APPEND Append to output log NO

HELP Displays these messages NO
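
As a sketch of typical usage (the credentials, connect string, new name, and log file name are placeholders), the utility is run against a database that is mounted but not open:

d:\>nid TARGET=SYS/password@prod DBNAME=newprod LOGFILE=nid.log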

Built-in Advisories

In Oracle9i Database Release 2, Oracle continues to make the server more intelligent and self-managing, freeing the administrator from time-consuming tuning and diagnostic tasks. The release provides a number of built-in, intelligent advisories for performance tuning: a Shared Pool advisory for improving parse time and minimizing CPU usage, and a SQL execution memory (PGA) advisory for improving SQL execution time and minimizing unnecessary CPU and I/O use. For example, the Shared Pool Advisory provides information on the optimal size of the library cache for your specific system. The advisory information is displayed conveniently in Enterprise Manager, and the library cache size can be changed dynamically directly from Enterprise Manager. In addition, there is an advisory for the Mean-Time-To-Recover setting, so the administrator can set time requirements for recovery from a system crash without jeopardizing run-time performance. The PGA advisory lets you predict the effect of altering PGA memory on overall instance performance.

Shared Pool Advisory

The shared pool advisory statistics track the library cache's use of shared pool memory and predict the change in total instance-wide parse time for different sizes of the shared pool. Two new views, v$shared_pool_advice and v$library_cache_memory, provide the information needed to determine how much memory the library cache is using, how much is currently pinned, how much is on the shared pool's LRU list, and how much parse time might be lost or gained by changing the size of the shared pool. These statistics are reset if the STATISTICS_LEVEL parameter is set to BASIC or when the instance is restarted. The v$shared_pool_advice view displays estimated parse time savings for different sizes of the shared pool. The sizes range from 50% to 200% of the current shared pool size, in equal intervals, and are not configurable. If rows in v$shared_pool_advice have the same parse time savings value (given in the estd_lc_time_saved column), there would be no additional library cache hits in those size ranges. If, however, the time savings increase for larger pool sizes, it may help to increase the shared pool size.

The v$library_cache_memory view displays information about memory allocated to library cache memory objects in different namespaces. A memory object is an internal grouping of memory for efficient management. A library cache object may consist of one or more memory objects.
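
One way to review the shared pool advisory data is a query along the following lines; the column names are those of v$shared_pool_advice, and the aliases are only for readability:

SELECT shared_pool_size_for_estimate AS pool_size,
shared_pool_size_factor AS size_factor,
estd_lc_time_saved AS est_time_saved
FROM v$shared_pool_advice
ORDER BY shared_pool_size_for_estimate;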

Enterprise Manager Shared Pool Size Advisor

The Oracle Enterprise Manager Shared Pool Sizing Advisor guides you in finding the optimal size for the shared pool, based on the relative change in parse time savings. The SGA Advisor is available from Enterprise Manager by selecting:

Databases > Database Name > Instance > Configuration > Memory tab > Advice in the SGA section.

Mean Time To Recover (MTTR) Advisory

The initialization parameter, FAST_START_MTTR_TARGET, introduced in Oracle9i, enables you to control the duration of instance recovery should a crash occur. To achieve fast recovery times, your instance incurs more overhead, mainly caused by additional disk writes. Unfortunately, in the previous release, it was very difficult to evaluate the impact of different FAST_START_MTTR_TARGET values on system performance.

The MTTR Advisory enables the instance to estimate the percentage change in the number of disk writes that would be caused by other values for the FAST_START_MTTR_TARGET parameter. By enabling the estimator while the instance runs a typical workload for a period of time, you can see the impact of either increasing or decreasing the parameter value.

Statistics for the estimates provided by the MTTR Advisory are collected when the initialization parameter STATISTICS_LEVEL is set to either TYPICAL or ALL and while the FAST_START_MTTR_TARGET parameter has a non-zero value. The v$mttr_target_advice view contains the values provided by the MTTR Advisory. The view has five rows that show the estimated disk activity for different settings of FAST_START_MTTR_TARGET: the current value; approximately one tenth of the current value; approximately one half of the current value; approximately one and a half times the current value; and approximately twice the current value. It is possible for v$mttr_target_advice to have fewer than five rows if the resulting value for mttr_target_for_estimate falls outside the range of allowed values.
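
A simple query such as the following (a sketch against v$mttr_target_advice; adjust the column list to your needs) shows the estimated write activity for each candidate setting:

SELECT mttr_target_for_estimate,
estd_cache_writes,
estd_cache_write_factor
FROM v$mttr_target_advice
ORDER BY mttr_target_for_estimate;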

View MTTR Advisory with Enterprise Manager

The Enterprise Manager interface to the MTTR Advisory helps you select the best MTTR time for your needs by reporting the advice statistics in graphical as well as tabular format.

The MTTR Advisor is available from Enterprise Manager by selecting:

Databases > Database Name > Instance > Configuration > Recovery tab > Advice next to Desired mean time to recover.

Note: The STATISTICS_LEVEL parameter, which is new in Oracle9i Database Release 2, also enables statistics collection for the PGA and shared pool advisories covered in the "Oracle9i: New Features Overview R2" lesson. In addition to the new advisories, this parameter also controls the buffer cache advisory introduced in the initial release of Oracle9i.

The old parameter, DB_CACHE_ADVICE, is deprecated.

V$PGA_TARGET_ADVICE

The new statistics that help you monitor the performance of the PGA memory component for a particular value of PGA_AGGREGATE_TARGET are based on two concepts: work area groups and a computed PGA cache hit percentage. By grouping work areas based on their optimal memory requirement, statistics can be collected for each group based on the number of optimal, one-pass, and multi-pass executions in that group. With this finer granularity of work area statistics than previously available, you can more accurately predict how much memory is required across the instance to maximize the number of optimal executions.

The PGA cache hit percentage summarizes the work area execution statistics into a single measure of PGA performance for a given value of PGA_AGGREGATE_TARGET. It is derived from the number of work areas that run in optimal mode, the number of passes for the non-optimal work areas, and the sizes of the work areas. The new view, v$sql_workarea_histogram, enables you to study the nature of the work area workload over a chosen time period. The work areas are split into 33 groups based on their optimal memory requirements. Each group is defined by the lower bound on its optimal memory requirement, displayed in the low_optimal_size column, and its upper bound, displayed in the high_optimal_size column. For each group, the view accumulates the number of executions using optimal, one-pass, and multi-pass memory since instance startup. These are displayed in the optimal_executions, onepass_executions, and multipasses_executions columns respectively. The total_executions column contains the sum of the other three execution counts. To determine whether you have set a good value for the PGA target size, query the v$sql_workarea_histogram view using a statement such as the following, which converts the low_optimal_size and high_optimal_size column values to kilobytes:

SELECT low_optimal_size/1024 AS low_kb,

(high_optimal_size+1)/1024 AS high_kb,

ROUND(100*optimal_executions/total_executions)

AS optimal,

ROUND(100*onepass_executions/total_executions)

AS onepass,

ROUND(100*multipasses_executions/total_executions)

AS multipass

FROM v$sql_workarea_histogram

WHERE total_executions != 0

ORDER BY low_kb;

Of course, as with any monitoring effort using dynamic views, you should issue the query at the beginning and at the end of a typical work period and use the differences between the two results to determine what activity occurred during that period. Although it would be ideal for all work areas to execute in the optimal size, this goal is typically achieved by over-allocating memory to the PGA. If you graph the results of your query as a histogram, you can quickly identify the location in the graph where work groups begin to use one-pass, or even multi-pass, sizes. In the example shown the former occurs with a work area size of about 200KB and the latter about 100MB.

Based on your knowledge of the type of work being done and the current level of performance, you can decide if this is acceptable or if the PGA_AGGREGATE_TARGET parameter value should be changed.

If you monitor PGA space consumption with the various views provided for that purpose (v$sql_workarea_histogram, v$pgastat, v$sql_workarea, and v$sql_workarea_active), you may decide that you need to reset your PGA_AGGREGATE_TARGET initialization parameter to make better use of your available memory. In some cases you may want to reduce the memory allocated; in others you may realize that you should increase the value. To help you determine by what factor to change the parameter value, Oracle9i Database Release 2 provides two new views that offer PGA sizing advice: v$pga_target_advice and v$pga_target_advice_histogram. They accumulate advice statistics to help you tune the PGA_AGGREGATE_TARGET value. The views are populated only if PGA_AGGREGATE_TARGET is set to a non-zero value that enables automatic management of the PGA, and the initialization parameter STATISTICS_LEVEL must be set to TYPICAL or ALL. The view contents are refreshed when you shut down the instance or when you change the value of PGA_AGGREGATE_TARGET dynamically.

Rows in these two views correspond to setting the PGA_AGGREGATE_TARGET value to 0.125, 0.25, 0.5, 0.75, 1, 1.2, 1.4, 1.6, 1.8, 2, 3, 4, 6, and 8 times its current value. However, if these values are either less than 10MB or more than 256GB, they will not be included in the output.

To begin monitoring and tuning the PGA target size, you should issue a query against the v$pga_target_advice view similar to the following:

SELECT ROUND(pga_target_for_estimate/1024/1024) AS

target_mb, estd_pga_cache_hit_percentage AS

cache_hit_percent,

estd_overalloc_count

FROM v$pga_target_advice

ORDER BY target_mb;

For output that is easier to read and use, this query converts the value of the pga_target_for_estimate column from bytes to megabytes. As mentioned earlier, you should issue this query at the beginning and at the end of a typical work period and use the differences between the two results to obtain advice related to the work performed during that period.

The output from the query on v$pga_target_advice contains up to 14 rows of estimated statistics based on the multiples of the current PGA_AGGREGATE_TARGET value. Assuming, for example, that your queries were run with the PGA_AGGREGATE_TARGET value set to 500MB, follow these steps to evaluate the results:

Step 1: Identify the first row with a value of zero in the estd_overalloc_count column. The rows above this one are for PGA_AGGREGATE_TARGET values (shown in the target_mb column) that are too small for the minimum PGA memory needs. In this case, this is the fifth row, which contains data for the current PGA_AGGREGATE_TARGET value, 500MB. Had the target_mb column value in the first row with a zero estd_overalloc_count been larger than the current setting, you should increase the PGA_AGGREGATE_TARGET parameter to at least this size.

Step 2: Examine the rows with PGA values larger than the minimum required to find the first pair of adjacent rows with values in the cache_hit_percent column that are within a few percentage points of each other. These rows indicate where, were you to graph the values, you would see an inflection point (sometimes referred to as a “knee”) in the curve. The optimal value for the PGA_AGGREGATE_TARGET parameter is at this inflection point, represented by the target_mb column value in the first of these two rows. Based on the above results, you should set the PGA_AGGREGATE_TARGET to 3000MB if you have sufficient memory. If you have even more memory available, you should assign it to some other use, such as one of the SGA components, rather than increasing the PGA target size.
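
Once you have chosen a new value, the parameter can be changed dynamically. Using the 3000MB figure from this example:

-- PGA_AGGREGATE_TARGET is dynamically modifiable; the value below is the one from the example above.
ALTER SYSTEM SET pga_aggregate_target = 3000M;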

To refine your analysis, you can look at the contents of the v$pga_target_advice_histogram view. This view is based on the v$sql_workarea_histogram view discussed earlier and contains rows for each of the same 33 work area groups used in that view. However, as with the v$pga_target_advice view, estimated data for each value is created for 14 multiples of the current PGA_AGGREGATE_TARGET setting. Therefore the v$pga_target_advice_histogram view potentially contains 14 different sets of values for each of the 33 work area groups, giving a possible total of 462 rows.

In addition to the columns corresponding to the work area lower and upper bounds and execution count columns in v$sql_workarea_histogram, v$pga_target_advice_histogram contains columns showing the size of PGA_AGGREGATE_TARGET used for the estimates in the row, the factor (multiple) of the current PGA_AGGREGATE_TARGET size this represents, whether advice is currently off or on (based on the setting of the STATISTICS_LEVEL parameter), and the number of work areas not included in the estimates due to space limitations in the instance. You would typically execute queries against the V$PGA_TARGET_ADVICE_HISTOGRAM view in the same way, and over the same period, as you do for your queries against V$PGA_TARGET_ADVICE. This way, you can check if a new value for PGA_AGGREGATE_TARGET that you identify by the method just discussed would result in reasonable optimal, one-pass, and multi-pass mode processing. You can select the appropriate rows by querying an appropriate value for the pga_target_factor column.

The following statement would generate output based on the value of 3000MB (that is, 6 times the current size) identified in the previous step.

SQL> SELECT low_optimal_size/1024 AS low_kb,

2 (high_optimal_size+1)/1024 AS high_kb,

3 estd_optimal_executions AS optimal,

4 estd_onepass_executions AS onepass,

5 estd_multipasses_executions AS mpass

6 FROM v$pga_target_advice_histogram

7 WHERE pga_target_factor = 6

8 AND estd_total_executions != 0

9 ORDER BY low_kb;

[Figure: sample output from the v$pga_target_advice_histogram query]

The output from your query might look like the result set shown above. The first two columns indicate the upper and lower bound of each work area size while the remaining three columns indicate the estimated number of work areas that will process in each mode: optimal, one-pass, and multi-pass. The first non-zero value in each of these three columns indicates the work area size that will cause the work to execute in that mode. From this output, you can tell that all work areas up to 16,384KB (16MB), the upper bound of work areas shown in row 11, should execute in optimal mode if you increase the PGA_AGGREGATE_TARGET by a factor of six. Work areas larger than this would require the one-pass mode as shown by the values of 14, 103, and so on in the onepass column in the subsequent rows. None of the work areas processed during the data collection period would require the multi-pass mode based on the estimates for this PGA target size. Using the results of queries on the V$PGA_TARGET_ADVICE and V$PGA_TARGET_ADVICE_HISTOGRAM views, you can determine what value of PGA_AGGREGATE_TARGET is going to give you a good hit ratio on PGA memory while also minimizing non-optimal work area activity. Although the ideal would be a 100 percent hit ratio with nothing but optimal work area modes, this example indicates that a value of 3000MB will give you an 83 percent hit ratio and less than one percent of the work areas will require the one-pass mode. Increasing the PGA_AGGREGATE_TARGET value is not going to improve these results significantly.

Block level compression

Heap organized block level compression works by eliminating repetitions of column values within individual blocks. Therefore, if the likelihood of such repetitions within a single block is low, the expected compression ratio will be low or no compression will happen at all. If the likelihood of such repetitions is higher, the expected compression ratio will be higher. DBAs or application developers can take advantage of this fact by reorganizing rows in the segments they want to compress to increase the likelihood of such repetitions. An obvious way to do this for a single column table is to order the table rows by the column value, which can be achieved by executing a CREATE TABLE AS SELECT with an ORDER BY clause. This simple technique also applies in an obvious way to a table where one column has low cardinality and the other columns have high cardinalities: order the table rows by the low cardinality column. For a table with multiple low cardinality columns, tuning to achieve the best compression ratio can be substantially more complicated. We suggest the following approach. First, determine the column with the lowest cardinality. The cardinality of a column can be determined by executing

SELECT COUNT(DISTINCT(c)) FROM T;

Once the lowest cardinality column is determined (let's assume it is column C1), determine the cardinalities of the other columns for fixed values of the chosen column.

This can be measured by executing

SELECT SUM(COUNT(*) * COUNT(DISTINCT(cK))) FROM T GROUP BY C1;

for all columns cK other than C1.

Let's assume column C2 has the lowest such measure. Then determine the column with the lowest cardinality when the first two are fixed. This can be measured by executing

SELECT SUM(COUNT(*) * COUNT(DISTINCT(cK))) FROM T GROUP BY C1, C2;

for all columns cK other than C1 and C2.

By continuing this process, one would determine some column sequence C1, C2, ... CN.

We recommend using this sequence as the sequence of columns in the ORDER BY clause of the CREATE TABLE ... COMPRESS statement. This approach does not guarantee the best possible compression ratio, but a very good ratio can often be achieved in practice.
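
For example, assuming the procedure above yielded the column order C1, C2, C3 for a table T, a compressed and reordered copy could be created as follows (the table name is illustrative):

CREATE TABLE t_compressed COMPRESS
AS SELECT * FROM t
ORDER BY c1, c2, c3;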

Data compression is most beneficial in data warehousing environments with few or no updates and very few deletes. Compression may be defined at the table or tablespace level. If a tablespace is created with compression as its default, then all tables in that tablespace are compressed unless a table is explicitly created with NOCOMPRESS; similarly, a table with compression enabled can be created in a tablespace that does not have compression as its default. For partitioned tables, compression can be set individually for each partition. Good compression ratios can be achieved with no special tuning; however, if a better ratio is desired, tuning can improve it. Grouping data with the same values improves the compression ratio. As described above, this technique can be applied to a table where one column has low cardinality and the other columns have high cardinalities by ordering the table rows by the low cardinality column. For a table with multiple low cardinality columns, tuning to achieve the best compression ratio can be substantially more complicated, and the following approach is recommended. First, determine the column with the lowest cardinality. The cardinality of a column can be determined by executing

SELECT COUNT(DISTINCT(column_name))

FROM table_name;

Once the lowest cardinality column is determined (let's assume it is column column_name1), determine cardinalities of other columns for fixed values of the chosen column. This can be measured by executing

SELECT SUM(COUNT(*) *

COUNT( DISTINCT(column_nameK )))

FROM table_name

GROUP BY column_name1;

For all columns column_nameK other than column_name1. Let's assume column column_name2 has the lowest such measure. Then determine the column with the lowest cardinality when the first two are fixed. This can be measured by executing:

SELECT SUM(COUNT(*) *

COUNT( DISTINCT(column_nameK ))

)

FROM table_name

GROUP BY column_name1, column_name2;

For all columns column_nameK other than column_name1 and column_name2. By continuing this process, you would determine some column sequence column_name1, column_name2, ... column_nameN. Use this sequence as the sequence of columns in the ORDER BY clause of the CREATE TABLE COMPRESS statement. In Oracle9i Database Release 2, to achieve better data compression you can specify ENABLE ROW MOVEMENT for both partitioned and nonpartitioned tables. This is not required for data compression to work. However, it can lead to better results because it is an optimization. In previous releases row movement was only available for partitioned tables. Tables with this property enabled do not have stable rowids. Data compression takes advantage of this property to achieve a higher compression ratio by moving rows around when it is beneficial. Any application that accesses tables that have row movement enabled should not cache rowids. If row movement is disabled, which is the default, rowids are stable, and do not change for the lifetime of the rows.
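
As a small illustration (the table name is hypothetical), row movement can be enabled on an existing table with:

ALTER TABLE sales_history ENABLE ROW MOVEMENT;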

Locally managed System Tablespace

Oracle strongly recommends that tablespaces be locally managed. In Oracle9i Database Release 2 you can create a locally managed SYSTEM tablespace. Locally managed tablespaces eliminate extent fragmentation and provide performance benefits over dictionary managed tablespaces. When creating a database with a locally managed SYSTEM tablespace, a default temporary tablespace must be included in the CREATE DATABASE statement (see the sketch after the list below).

When the SYSTEM tablespace is locally managed:

• It is not possible to create dictionary managed tablespaces.

• It is not possible to migrate a locally managed tablespace to the dictionary managed type.

• It is possible to transport dictionary managed tablespaces into a database with a locally managed SYSTEM tablespace. Note that after the transport, such dictionary managed tablespaces cannot be made read/write.

• It is possible to drop a read-only dictionary managed tablespace when the SYSTEM tablespace is locally managed.

If the database is created using the Database Configuration Assistant (DBCA), a locally managed SYSTEM tablespace is the default.
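
A minimal sketch of such a CREATE DATABASE statement is shown below. The file names, sizes, and tablespace names are placeholders; in practice you would also specify log files, character sets, and other clauses as needed.

CREATE DATABASE mydb
DATAFILE '/u01/oradata/mydb/system01.dbf' SIZE 325M
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/oradata/mydb/temp01.dbf' SIZE 100M
UNDO TABLESPACE undotbs1
DATAFILE '/u01/oradata/mydb/undotbs01.dbf' SIZE 200M;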

End user self correction

Oracle9i Database Release 2 extends Flashback Query, a feature introduced in Oracle9i Release 1. You can now request flashback data within a single SQL statement, instead of only at the session level. You can restore deleted rows, retrieve old column values, and recover a previous version of a table; you can also select the differences between two points in time. Flashback queries use the AS OF clause of the SELECT statement rather than the DBMS_FLASHBACK package. This technique is very flexible, allowing joins, set operations, subqueries, and views with different date/time or SCN settings for each table in the query. You can also restore or capture past data by using flashback queries inside INSERT or CREATE TABLE AS SELECT statements.

-- Query based on past SCN number:

SELECT employee_id, manager_id

FROM employees

AS OF SCN 81591;

-- Find changes made between 1-2 hours ago:

SELECT a.last_name, b.last_name

FROM employees AS OF TIMESTAMP SYSDATE-1/24 a

FULL OUTER JOIN

employees AS OF TIMESTAMP SYSDATE-2/24 b

ON (a.employee_id = b.employee_id);

-- Create new table based on old data:

CREATE TABLE new_employees AS

SELECT * FROM employees AS OF SCN 90932;

You can create a view that refers to past data by using the AS OF clause in the SELECT statement that defines the view. If you specify a relative time by subtracting from SYSDATE, the past time is recalculated for each query. For example:

SQL> select sum(salary) , TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS')"NOW"

2 from employees;

SUM(SALARY) NOW

----------- -------------------

1070 10-12-2002 07:34:54

SQL> update employees

2 set salary = 42;

107 rows updated.

SQL> commit;

Commit complete.

SQL> CREATE VIEW minutes_ago AS

2 SELECT * FROM EMPLOYEES AS OF

3 TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);

View created.

SQL> select sum(salary), TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS')"NOW" from minutes_ago;

SUM(SALARY) NOW

----------- -------------------

1070 10-12-2002 07:34:54

SQL> select sum(salary) , TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS')"NOW"

2 from employees;

SUM(SALARY) NOW

----------- -------------------

4494 10-12-2002 07:34:54

When using this technique, remember that daylight saving time and leap years can cause anomalous results. For example, SYSDATE - 1 might refer to 23 or 25 hours ago shortly after a change to or from daylight saving time.

Note: Most of this material was taken from the following Oracle University courses:

Oracle9i Database: New Features for Administrators

Oracle9i: New Features Overview

And

Oracle University Online Learning -

Oracle9i Database Release 2: New Features Overview
