40144 - Data Guard SQL Apply - Back to the Future!



Data Guard SQL Apply: Back to the Future

Larry M. Carpenter, Oracle Corporation

Viv Schupmann, Oracle Corporation

Introduction

In Oracle9i Database, Data Guard introduced a new standby technology called SQL Apply, which maintains a logical standby database (a transactionally consistent copy of the primary database) in addition to Data Guard's existing physical standby capability. Together with significant enhancements to the existing Redo Apply feature, SQL Apply increases the flexibility of Data Guard for disaster recovery (DR) and data protection. SQL Apply converts redo log files into SQL statements for re-execution on a standby database. This helps reduce the risk of propagating corruption to the standby, and allows reports to be run on the standby database while logs are being applied.

In addition, Data Guard in Oracle9i Database Release 2 includes enhancements to the Data Guard Broker framework to automate tasks associated with managing both physical and logical standby databases, including initial instantiation, failover, and graceful primary-to-secondary switchover.

Since its introduction, logical standby databases have been deployed in production at many sites worldwide, augmenting customers' disaster recovery strategies and giving users more access to standby data while removing the overhead of running reports and queries on the production database. At the same time, these activities in no way affect the capabilities and recoverability of the disaster recovery strategy already in place.

This paper provides a quick review of the Data Guard SQL Apply architecture, describes how some customers are using SQL Apply in production today, previews what is coming in Oracle Database 10g to improve the SQL Apply technology, and demonstrates how SQL Apply can be used as part of an effective solution to achieve the high-availability goals of an enterprise.


Overview of Redo Apply and SQL Apply: A Quick Review

With Data Guard, transactions that make changes to the primary database generate redo data, which is archived locally and sent to the physical or logical standby databases.

When the redo data is transmitted from the primary database, it is archived on the standby system in the form of archived redo logs. The archived redo logs are then automatically applied on the standby database to maintain synchronization with the primary database and to allow transactionally consistent access to the data.

Prior to Oracle9i Database Release 2, only the physical standby database implementation was available, in which the standby database can either be in recovery mode or be open read-only. In other words, while the physical standby database is applying logs it cannot be opened for reporting, and vice versa. However, even when opened in read-only mode, the physical standby database continues to receive redo data from the primary database, maintaining data protection.

With logical standby databases, the database can be available for reporting while the logs are being applied, allowing the logical standby database to be used for other business activities. Changes from the primary database can be applied concurrently with end-user access, because the logical standby database remains open while updates are made through the execution of SQL statements.

Logical standby databases use the same log transport services as physical standby databases. Although the method of transmitting redo data from the primary database to the standby database is always the same, the main difference between physical and logical standby databases is the manner in which the redo logs are applied.

Redo Apply Architecture

Figure 1 shows Redo Apply automatically sending the primary database's redo data and applying that redo to a physical standby database. The redo can be shipped as it is generated or after it is archived on the primary database. The redo logs are applied to each physical standby database by Data Guard managed recovery (MRP), which is based on standard Oracle recovery techniques.

[pic]


Figure 1 – Redo Apply Architecture

The physical standby database is an exact copy of the primary (production) database. It can be used not only for disaster recovery in the event of a failure at the primary site, but also for planned events (such as system or operating system upgrades) and for performing backups of the primary database.

SQL Apply Architecture

The SQL Apply technology first transforms redo data into SQL statements and then executes the generated SQL statements on the logical standby database. This is done with the use of LogMiner™ technology. Because the logical standby database is updated by applying SQL statements, it must remain open. This allows users to access the standby database for queries and reporting purposes at any time. Figure 2 shows the SQL Apply architecture.

[pic]

Figure 2 – SQL Apply Architecture


Although the logical standby database is open for read/write operations, the tables being maintained by SQL generated from a primary database log are read-only to users of the logical standby. While those tables are being updated, they can be used simultaneously for other tasks such as reporting, summations, and queries. The DBA can optimize these tasks by creating additional indexes and materialized views on the maintained tables, but the tables must maintain logical consistency from an application access perspective in order to fulfill the role of a standby data source. Hence, a logical standby database can be used for other business purposes in addition to disaster recovery.

A logical standby database has some restrictions on datatypes, types of tables, and types of data definition language (DDL). Unsupported datatypes and tables are described in more detail in the Oracle Data Guard Concepts and Administration documentation. Later in this paper we will discuss which of these restrictions have been lifted in Oracle Database 10g.

Process Model

Data Guard uses several processes to achieve the automation necessary for disaster recovery and high availability. The following figure shows the Oracle processes involved in the operation of an Oracle9i Data Guard solution.

[pic]

The major components of the Oracle Data Guard architecture are:

Log Writer Process

The log writer process (LGWR) collects transaction redo, updates the online redo logs, and ships the redo information directly to the RFS process on the standby database.
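Redo transport is configured through the LOG_ARCHIVE_DEST_n initialization parameter on the primary database. For example, a destination using LGWR transport might be specified as follows (the destination number and service name shown are illustrative):

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'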

Archiver Processes

The archiver process (ARCn) or a SQL session performing an archival operation creates a copy of the online redo logs locally for use in a primary database recovery. The ARCn process may also ship the redo data to the RFS process while simultaneously archiving the online log. It is also responsible for proactively detecting and resolving gaps on all standby databases.

During an extended network disconnection, redo data destined for a standby database is accumulated in archive logs on the primary database. When communications are restored, the archive process sends the redo data to resynchronize standby databases that diverged from the primary.

Fetch Archive Log (FAL) (Physical Standby Databases Only)

The fetch archive log (FAL) mechanism provides a client/server service that Redo Apply uses to resolve gaps detected in the range of archive logs generated at the primary database and received at the standby database. FAL also manages the process of retrieving missing or damaged log files.
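FAL is configured on the physical standby database with the FAL_SERVER and FAL_CLIENT initialization parameters, which name the Oracle Net service to request missing logs from and the service to which they should be delivered. For example (the service names shown are illustrative):

FAL_SERVER=boston

FAL_CLIENT=chicago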

Remote File Server

The remote file server (RFS) process is responsible for receiving the redo data from the primary database, acknowledging receipt of the redo data, checking that files are complete, and writing changes to standby redo logs (physical standby database only) or archived redo logs at the standby site.

Redo Apply

Redo Apply manages the application of redo information from the primary site to a physical standby database. If standby redo logs are used, the ARCn process on the physical standby database archives the standby redo logs to be applied by the managed recovery process (MRP). If you start managed recovery with the SQL statement ALTER DATABASE RECOVER MANAGED STANDBY DATABASE, the foreground session performs the recovery. If you use the optional DISCONNECT [FROM SESSION] clause, the MRP background process is started instead. If you use the Data Guard Broker to manage your standby databases, the broker always starts the MRP background process for a physical standby database.

SQL Apply Processing

SQL Apply manages the application of redo data received from the primary site to a logical standby database. The logical standby process (LSP) coordinates parallel background processes to convert the transaction information back to SQL and execute those SQL statements on the logical standby database.

In order for SQL Apply to begin applying redo from the primary database, a LogMiner™ dictionary must first be built on the primary database and processed on the logical standby database. One of the steps during logical standby database creation is to use a Data Guard-supplied package that performs the dictionary build for you and puts it into the redo stream for processing on the logical standby database. For example, the statement:

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

calls the LogMiner™ dictionary build package with the appropriate parameters to write the primary's dictionary information to the redo stream, where it will be archived to the archive logs and sent to the logical standby for processing. To ensure that the redo logs contain information of value, you must enable supplemental logging with the primary key and unique index attributes. Depending on the size of the dictionary, it may be contained in multiple redo logs. Provided the relevant redo logs have been archived, you can find out which redo logs contain the start and end of an extracted dictionary by querying the V$ARCHIVED_LOG view, as follows:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
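As noted above, supplemental logging must be enabled on the primary database before the dictionary build, so that the redo stream contains the row-identification data SQL Apply needs. A typical statement, shown here as an illustration, is:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;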

To start SQL Apply, use the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY INITIAL;

Note: Only include the INITIAL keyword the first time you start applying data from redo logs to the standby database. For example, the following statements show how to subsequently stop and restart SQL Apply operations:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

The following figure (Figure 3) shows the LSP and background PX processes involved in the SQL Apply engine:

[pic]

Figure 3 – SQL Apply Processing Flow

• The LSP0 process coordinates the parallel mining processes with the parallel apply processes.

• The PX processes are two groups of parallel execution processes that are automatically started by LSP0.

The first group of PX processes uses LogMiner™ to mine the redo logs received from the primary database. The second group applies those changes to the logical standby database. It is important to note that the mining and apply phases run in parallel, synchronized by LSP0 to maintain a high level of database performance. At all times during the application of redo, the logical standby database is open and the data contained within it is available for use.


Managing SQL Apply

The DBMS_LOGSTDBY PL/SQL package provides several procedures, and the DBA_LOGSTDBY views provide information, to help you manage, monitor, and tune SQL Apply operations on logical standby databases.

Monitoring Logical Standby

The standard tools used to monitor SQL Apply operations include the Data Guard Manager GUI and various database views. For example, the DBA_LOGSTDBY_PROGRESS view is useful in determining the status of SQL Apply processing. Once SQL Apply operations are started using the ALTER DATABASE START LOGICAL STANDBY APPLY statement, a DBA can monitor its progress by querying the DBA_LOGSTDBY_PROGRESS view. For example:

SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN

----------- ----------

144146 144146

When the numbers in the APPLIED_SCN and NEWEST_SCN columns are equal (as shown in the above example), it means that all of the available data in the redo log was applied. These values can be compared to the values in the FIRST_CHANGE# column in the DBA_LOGSTDBY_LOG view to see how much log information has to be applied and how much remains.

For example:

SQL> SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,

2> TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# FROM DBA_LOGSTDBY_LOG

3> ORDER BY SEQUENCE#;

FILE_NAME SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#

------------------------- ---- ------------- ------------ -------- --- --- ----

/oracle/dbs/hq_nyc_2.log 2 101579 101588 11:02:58 NO NO 1

/oracle/dbs/hq_nyc_3.log 3 101588 142065 11:02:02 NO NO 1

/oracle/dbs/hq_nyc_4.log 4 142065 142307 11:02:10 NO NO 1

/oracle/dbs/hq_nyc_5.log 5 142307 142739 11:02:48 YES YES 1

/oracle/dbs/hq_nyc_6.log 6 142739 143973 12:02:10 NO NO 1

/oracle/dbs/hq_nyc_7.log 7 143973 144042 01:02:11 NO NO 1

/oracle/dbs/hq_nyc_8.log 8 144042 144051 01:02:01 NO NO 1

/oracle/dbs/hq_nyc_9.log 9 144051 144054 01:02:16 NO NO 1

/oracle/dbs/hq_nyc_10.log 10 144054 144057 01:02:21 NO NO 1

/oracle/dbs/hq_nyc_11.log 11 144057 144060 01:02:26 NO NO 1

/oracle/dbs/hq_nyc_12.log 12 144060 144089 01:02:30 NO NO 1

/oracle/dbs/hq_nyc_13.log 13 144089 144147 01:02:41 NO NO 1

The output from this query shows that a LogMiner dictionary build starts at log file sequence number 5. The most recent archive log file is sequence number 13 and it was received at the logical standby database at 01:02:41.

Another useful view is DBA_LOGSTDBY_EVENTS, which can be queried to display events of interest from SQL Apply operations. In particular, DDL execution or anything that generates an error is recorded in the events table. Using the MAX_EVENTS_RECORDED parameter of the DBMS_LOGSTDBY.APPLY_SET procedure, you can control how much activity is recorded in the DBA_LOGSTDBY_EVENTS table. By default, 100 records are stored in this table, but you can increase that to 200 by specifying the following PL/SQL statement:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', 200);

Note: Use the DBMS_LOGSTDBY.APPLY_UNSET procedure if you want to reset the value of specific parameters to the system default values.

If SQL Apply has stopped, the last record in the DBA_LOGSTDBY_EVENTS view provides the information needed to determine why the apply stopped.
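For example, a query along the following lines (the column list shown is one reasonable choice) retrieves the events in order, so that the failure that stopped the apply appears last:

SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS

  2> ORDER BY EVENT_TIME, COMMIT_SCN, CURRENT_SCN;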

Tuning a Logical Standby

The rate at which SQL Apply can apply transactions on a logical standby database can be tuned with the MAX_SGA, MAX_SERVERS, and TRANSACTION_CONSISTENCY parameters of the DBMS_LOGSTDBY.APPLY_SET procedure.

Controlling memory usage on the logical standby database

Use the DBMS_LOGSTDBY.APPLY_SET MAX_SGA parameter to specify the percentage of SGA (System Global Area) of the standby database that will be used to cache changes from the Redo logs. This is not pre-allocated at instance startup, but it is allocated from the shared pool as needed, thus enabling it to be changed dynamically. The logical standby infrastructure attempts to contain its memory consumption below this limit. Note that underneath the SQL Apply technology, LogMiner is reading the redo stream and assembling whole transactions. These transactions are staged in the SGA. Thus, setting this parameter too low may force LogMiner to swap data in and out of memory and thereby impact performance. Setting it too high can cause an overall system performance degradation, because a logical standby database is an open database. By default, SQL Apply uses 25 percent of the shared pool space.
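For example, to adjust this limit (the value shown is illustrative; consult the documentation for the exact semantics in your release):

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 30);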

Adjusting the maximum number of parallel execution processes

SQL Apply employs the Oracle parallel execution framework to spawn multiple processes that cooperate to mine the redo stream and apply the assembled transactions in parallel. Depending on the system load, the MAX_SERVERS parameter can be used to tune the performance of the system; as a rule of thumb, it should be set to twice the number of available CPUs. If applications routinely perform parallel queries on the standby, a certain number of parallel servers should be reserved for those queries. To allocate 30 parallel servers for logical standby log apply services, enter the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 30);

Thus, if the database initialization parameter, PARALLEL_MAX_SERVERS, is set to 50, 30 of these servers will be available for logical standby processing and 20 parallel query servers will be allocated for parallel query processing.

Adjusting the transaction consistency

The tables being maintained by SQL generated from a primary database log will, of course, be read-only for users of the logical standby database. These tables can have different indexes and physical characteristics from the primary database, but they must maintain transaction consistency from an application access perspective to fulfill the role of a standby data source. Use the TRANSACTION_CONSISTENCY parameter of the DBMS_LOGSTDBY.APPLY_SET procedure to control how transactions are applied to the logical standby database:

• FULL: The default setting, which applies transactions to the logical standby database in the same order in which they were committed on the primary database (transactions are applied in commit SCN order). This incurs a performance penalty, but ensures that all states seen on the primary will also be seen at the standby.

• READ_ONLY: Transactions are committed out of order (which provides better performance), but ordering is periodically enforced. SQL SELECT statements executed on the standby database always return consistent results based on the last consistent SCN known to the apply engine. SQL Apply periodically refreshes an SCN maintained in the SGA that represents a consistent state, and queries executed on the logical standby database automatically use Oracle Flashback to that SCN. This is beneficial when the logical standby database is being used to generate reports. Any Oracle Flashback restrictions apply to this mode.

• NONE: Transactions are applied out of order. This results in the best performance of the three modes, but might give inconsistent results on the standby database. If the applications reading the logical standby database make no assumptions about transaction order, this option works well. For example, on the primary database, one transaction added a new customer and a second transaction added a new order for that customer. On the standby database, those transactions may be applied in the reverse order, so the order may exist before the customer does. A reporting application on the standby database that expects to find a customer for every order might then fail, because constraints are not checked.
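The consistency mode is set with the same APPLY_SET procedure; for example, to select the READ_ONLY mode described above:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('TRANSACTION_CONSISTENCY', 'READ_ONLY');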

Increasing the Performance of the SQL Operations

One of the biggest factors in SQL Apply performance lies with the tables being updated on the logical standby database. During the creation of the logical standby database, the LogMiner™ dictionary build provides the information about the structure and format of the tables that will be maintained on the logical standby database. Since the actual SQL executed on the primary is not contained in the redo stream, this dictionary and the block-level redo are used to reconstruct the SQL to be executed on the standby, so the resulting SQL may bear very little resemblance to the original. For example, an UPDATE statement that updates 1,000 rows on the primary will be reconstructed as 1,000 individual UPDATE statements on the standby. If there is no usable index on the table, what was most likely one table scan on the primary becomes 1,000 table scans on the standby, severely impacting SQL Apply performance. If it is not desirable to add a unique index on the primary, it becomes extremely important that you add the appropriate index on the standby. How to do this is described in the section Modifying the Logical Standby Database.

Protection from Human Errors and Data Corruption

Administrators can optionally delay the application of changes received from the primary database. Staggering the application of changes to a standby database protects production data from user errors, corruptions, file system failures, and overwritten disk volumes. Changes to the primary database are sent to the standby system in real time, where they are stored in the form of archived redo log files and applied after a user-specified delay expires. This allows sufficient time for an erroneous change at the primary site to be detected and the application of changes to the standby database to be stopped. In the event of any production problem (program, user, or data corruption), the DBA can suspend the SQL Apply process and bring the standby database up to the point just before the failure.

Use the DELAY=minutes attribute of the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to specify a time lag for applying redo logs at the standby site. The DELAY interval is relative to when the archived redo log is complete at the destination. It does not delay the transport of the redo data to the standby database.
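For example, a four-hour apply delay could be added to an archive destination as follows (the destination number and service name shown are illustrative):

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC DELAY=240'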

Specifying an apply delay interval on the primary database is the same for both logical and physical standby databases. However, on a logical standby database, if the primary database is no longer available, you can cancel the apply delay interval using the following PL/SQL command:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_UNSET('APPLY_DELAY');

After you issue this statement, you can restart SQL Apply so that it immediately begins applying all archived redo logs available at the standby.

Configuring SQL Apply

By default, all SQL statements executed on the primary database are applied to a logical standby database. However, if only a subset of activity on a primary database is of interest on the standby database, the DBMS_LOGSTDBY.SKIP procedure can be used to define filters that prevent SQL Apply from issuing the SQL statements on the logical standby database. Data Guard can filter or skip the following types of SQL statements so that they are not applied on the logical standby database:

• DML or DDL changes for tables

• CREATE, ALTER, or DROP INDEX DDL statements

• CREATE, ALTER, DROP, or TRUNCATE TABLE statements

• CREATE, ALTER, or DROP TABLESPACE statements

• CREATE or DROP VIEW statements

Use the SKIP procedure with caution, particularly when skipping DDL statements. If a CREATE TABLE statement is skipped, for example, you must also skip any other DDL or DML statements that refer to that table. Otherwise, these statements will fail and cause an exception. When this happens, the SQL Apply services stop running and will need to be manually restarted after correcting the error.

Skipping DML and DDL statements for Schema and Non-schema Operations

The following code example demonstrates how to skip all SQL apply operations that reference the EMP table in a logical standby database.

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'SCOTT', 'EMP', NULL);

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'EMP', NULL);

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

The next example skips CREATE TABLESPACE and ALTER TABLESPACE for non-schema DDL operations and then queries the DBA_LOGSTDBY_SKIP view to ensure these operations are being skipped.

SQL> EXEC DBMS_LOGSTDBY.SKIP('CREATE TABLESPACE', NULL, NULL, NULL);

SQL> EXEC DBMS_LOGSTDBY.SKIP('ALTER TABLESPACE', NULL, NULL, NULL);

SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;

In addition to skipping DML and DDL statements for schema and non-schema operations, you can skip specific DML and DDL operations as well. For instance, if no DML changes to the table COMMENT_TBL are required on the standby, the DBA can direct SQL Apply to skip DML associated with that table. In this example, the package call would look like the following:

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'CUSTOMER', 'COMMENT_TBL', NULL);

Ignoring Logical Standby Database Failures

An important skip tool is DBMS_LOGSTDBY.SKIP_ERROR. Upon encountering an error, SQL Apply uses the criteria contained in this procedure to determine if the error should cause SQL Apply to stop. All errors to be skipped are stored in system tables that describe how exceptions should be handled.

Depending on how important a table is, you might want to do one of the following:

• Ignore failures for a table or specific DDL.

• Associate a stored procedure with a filter so that runtime determinations can be made whether to skip the statement, execute this statement, or execute a replacement statement.

Taking one of these actions prevents the SQL Apply operations from stopping. Later, you can query the DBA_LOGSTDBY_EVENTS view to find and correct any problems that exist.

Additionally, you can indicate what types of events you want recorded. By default, everything is recorded in the DBA_LOGSTDBY_EVENTS table. However, you can set the RECORD_SKIP_DDL, RECORD_SKIP_ERRORS, and RECORD_APPLIED_DDL parameters of the APPLY_SET procedure to FALSE to avoid recording these events. Errors that cause SQL Apply operations to stop are always recorded in the events table (unless access to the system tablespace is an issue), and are always written to the ALERT.LOG file as well, with the phrase 'LOGSTDBY event' included in the text. When querying the view, order the results by EVENT_TIME, COMMIT_SCN, and CURRENT_SCN. This ordering ensures that a shutdown failure appears last in the view.
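For example, to stop recording skipped DDL statements in the events table:

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('RECORD_SKIP_DDL', FALSE);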

For example, the following SKIP_ERROR procedure specifies the criteria SQL Apply should follow if an error is encountered:

SQL> EXECUTE DBMS_LOGSTDBY.SKIP_ERROR('DDL', 'JOE', 'APPTEMP', NULL);

Upon encountering an error, SQL Apply uses the criteria contained in this procedure to determine whether the error should cause log apply services to stop:

• DDL indicates that errors caused by a DDL transaction should be skipped.

• JOE is the name of the schema.

• APPTEMP is the name of the object associated with the DDL SQL statements.

• NULL indicates that the error will be ignored and a stored procedure will not be called.

Ignoring Transaction Failures

The SKIP_TRANSACTION procedure provides a way to skip (ignore) applying failed transactions to the logical standby database. You can skip specific transactions by specifying transaction identification information.

You may want to use the SKIP_TRANSACTION procedure to skip a transaction that has already failed or that might otherwise cause log apply services to stop. If log apply services stop due to a particular transaction (for example, a DDL transaction), you can specify that transaction ID and then continue to apply. You can call this procedure multiple times for as many transactions as you want log apply services to ignore.

If an error was caused by a DDL transaction containing a file specification that does not match the logical standby database environment, perform the following steps to fix the problem (for example, to fix an ADD DATAFILE pathname problem):

• Use the DBMS_LOGSTDBY.GUARD_BYPASS_ON procedure to bypass the database guard so you can make modifications to the logical standby database:

SQL> EXECUTE DBMS_LOGSTDBY.GUARD_BYPASS_ON;

• Execute the DDL statement, using the correct file specification, and then reenable the database guard. For example:

SQL> ALTER TABLESPACE t_table ADD DATAFILE '/dbs/t_db.f' SIZE 100M REUSE;

SQL> EXECUTE DBMS_LOGSTDBY.GUARD_BYPASS_OFF;

• Query the DBA_LOGSTDBY_EVENTS view to find the XIDUSN, XIDSLT, and XIDSQN values for the failed DDL, and provide the values to the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure. The failed DDL statement will always be the last transaction. For example:

SQL> SELECT XIDUSN, XIDSLT, XIDSQN FROM DBA_LOGSTDBY_EVENTS

2> WHERE EVENT_TIME = (SELECT MAX(EVENT_TIME) FROM DBA_LOGSTDBY_EVENTS);

SQL> EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION( /*xidusn*/, /*xidslt*/, /*xidsqn*/);

• Start SQL Apply on the logical standby database.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

When log apply services restart, they will attempt to re-execute the transaction that failed. If you do not want to re-execute it, provide the transaction identifiers to the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure (as shown in the steps above) to skip the transaction.

Note: Do not let the primary and logical standby databases diverge when skipping transactions. If possible, you should manually execute a compensating transaction in place of the skipped transaction. Arbitrarily skipping transactions will most likely cause the SQL Apply operations to fail. Skipping DML can also be dangerous since you are skipping all the DML in the transaction and not just the failed statement.

Modifying the Logical Standby Database

Until you start SQL Apply operations on the newly created logical standby database, it is possible for users to modify the logical standby database. To prevent non-privileged users from executing DML or DDL statements on the logical standby database, use the ALTER DATABASE GUARD ALL statement before opening the database. Failure to do so could allow jobs to be scheduled and potentially modify tables in the logical standby database. Once the logical standby database is created completely, the guard will be set to ALL by default.

The ALTER DATABASE GUARD statement allows the following keywords:

• ALL—Prevents all users other than SYS from making changes to any data in the logical standby database.

• STANDBY—Prevents all users other than SYS from making DML and DDL changes to any table or sequence being maintained through SQL Apply operations.

• NONE—Allows any user with the correct security privileges access to all data in the logical standby database. It is highly unlikely that you will ever need to use this option.

With the database guard set to ALL by default, only user SYS can modify data.

There may be situations when it is useful to bypass the database guard, temporarily, to make changes to the logical standby database. For example, sometimes you may need to add an index or execute a statement that could not be replicated from the primary database to the logical standby database. If you are logged in as SYS, issue the ALTER DATABASE GUARD STANDBY statement to make DML and DDL changes.

If you are logged in as SYSTEM or another privileged account, you will not be able to issue DDL or DML statements on the logical standby database without first bypassing the database guard for the session. The following example shows how to stop SQL Apply, bypass the database guard using the DBMS_LOGSTDBY.GUARD_BYPASS_ON procedure, and then execute SQL statements on the logical standby database:

SQL> -- Stop SQL Apply if you plan to modify a table or sequence.

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Database altered.

SQL> -- Disable the database guard, just for the session.

SQL> EXECUTE DBMS_LOGSTDBY.GUARD_BYPASS_ON;

PL/SQL procedure successfully completed.

SQL> -- Add an index to SCOTT.EMP.

SQL> ALTER TABLE SCOTT.EMP ADD CONSTRAINT EMPID UNIQUE (EMPNO);

Table altered.

SQL> -- Enable the guard for this session.

SQL> EXECUTE DBMS_LOGSTDBY.GUARD_BYPASS_OFF;

PL/SQL procedure successfully completed.

SQL> -- Restart SQL Apply.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

Database altered.

Oracle Corporation recommends that you do not perform DML operations while the database guard bypass is enabled, as it is unlikely that you will be able to modify rows in a table in such a way that the logical standby database can incrementally maintain the rows.

Note: If you are running with the database guard set to the STANDBY option, privileged users must be aware that only standby database tables are being protected from being overwritten and many other objects are left unprotected. Privileged users must be careful not to DROP or CREATE any database object, including tables that are being maintained by the logical standby database. However, users without privileges cannot inadvertently perform DML operations. Thus, they cannot INSERT, UPDATE, or DELETE rows from standby tables, nor can users use sequences maintained by SQL Apply.

Changing Table Maintenance in a Logical Standby Database

Sometimes, a reporting application must collect summary results and store them temporarily or track the number of times a report has been run. Although the main purpose of an application may be to perform reporting activities, the application may need to issue DML (insert, update, and delete) operations on a logical standby database. It may even need to create or drop tables. You can set up SQL Apply and the database guard to allow reporting operations to modify data as long as the data is not being replicated by the logical standby database. Do this by:

• Removing primary database tables from being maintained by SQL Apply on the logical standby database

• Specifying the set of tables on the logical standby database to which an application can write data

• Setting the database guard to protect only standby tables

This setting modifies the list of tables that the logical standby database will maintain. For example:

SQL> -- Stop SQL Apply before modifying the set of tables being maintained.

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

Database altered.

SQL> -- Use the SKIP procedure to indicate which tables to remove from maintenance.

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('SCHEMA_DDL','MYSCHEMA','SUMMARY_REPORT');

PL/SQL procedure successfully completed.

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','MYSCHEMA','SUMMARY_REPORT');

PL/SQL procedure successfully completed.

SQL> -- Start SQL Apply so that the changes can be applied to

SQL> -- the logical standby database.

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

Database altered.

SQL> -- Verify your changes are recognized by the full database.

SQL> -- This may take a while so the following query may need to be

SQL> -- repeated until no rows are returned.

SQL> SELECT NAME FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME = 'EVALUATE_SKIP';

SQL> -- Instruct the database to allow updates.

SQL> ALTER DATABASE GUARD STANDBY;

Database altered.

The same steps can be used to add new tables to the logical standby database that do not exist on the primary database.

Possible Configurations

The best practice when configuring Data Guard is for the primary and standby sites to have identically configured systems. This will ensure that after a switchover or failover, the logical standby site containing the production role will have the capacity to handle the same load and provide the same level of fault tolerance. An identical standby site also allows for procedures, processes, and overall management to be the same between sites that are set up identically. The standby site is leveraged for all unplanned outages that are not resolved automatically or quickly on the primary site and for many planned outages when maintenance is required on the primary site.

Basic Logical Standby Configuration

Figure 4 shows a Data Guard configuration with a remote logical standby destination. In this configuration, the archiver on the primary system is archiving only to the local destination. The log writer process is sending redo data to the standby system at the same time it is writing the data to the online redo logs. The RFS process receives the redo data and writes it to archived redo logs on the standby database. A log switch on the primary database makes the redo available to SQL Apply and allows it to be applied to the logical standby database.

[pic]

Sharing a Redo Destination Among Multiple Standby Databases

Physical and logical standby databases complement each other and can be maintained simultaneously, providing high-value data protection while offloading work from the primary database. Physical and logical standby databases can be configured on the same standby system, sharing redo so that the redo data is transmitted only once. This is known as a dependent destination.

Use the DEPENDENCY attribute of the LOG_ARCHIVE_DEST_n initialization parameter to define a dependent destination. This attribute indicates that this destination depends on the successful completion of archival operations for the parent destination.
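As an illustration, a dependent destination might be defined with parameters like the following. The service names in this hypothetical example are phys_std (the physical standby, the parent destination) and log_std (the logical standby on the same system):

LOG_ARCHIVE_DEST_2='SERVICE=phys_std'

LOG_ARCHIVE_DEST_3='SERVICE=log_std DEPENDENCY=LOG_ARCHIVE_DEST_2'

Because destination 3 is dependent on destination 2, the redo is transmitted only once; the logical standby accesses the archived redo logs received on behalf of the parent destination.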

Figure 5 shows a Data Guard configuration in which the primary database transports redo data to a physical standby, where the logical standby database can also use the redo for its apply services.

[pic]

Specifying a destination dependency can be useful in the following situations:

• A physical standby database and a logical standby database are configured on the same node.

• The standby database and the primary database are on the same node, so the archived redo logs are implicitly accessible to the standby database.

• Operating system-specific network file systems are used, providing remote standby databases with access to the primary database archived redo logs.

• There are multiple standby databases on the same remote node, sharing access to common archived redo logs for staggered managed recovery operations.

In these situations, although a physical archival operation is not required, the standby database needs to know the location of the archived redo logs. This allows the standby database to access the archived redo logs when they become available for application. You must specify an archiving destination as being dependent on the success or failure of another (parent) destination.

Customer Case Studies

It is always interesting to see how others are using a new technology and for what purposes. Data Guard SQL Apply is no exception. Over the past year, many customers have been investigating, planning, and implementing Logical standby databases to complement their Physical standby setups. The following three sections outline two banks' current production implementations of Data Guard using both Physical and Logical standby databases.

Bank Number 1 – A Web Site

This implementation of Data Guard uses both Logical and Physical standby databases to provide both DR and general external user access to the data. The data in this case is the content for the Internet web site for this Bank. Inside the firewall are the Production and its associated Physical standby databases. Outside the firewall are two Logical standby databases, which provide the content for the web site. This setup is shown in Figure 4 below.

The Web Application developers of the bank constantly update the content of the web site by adding or modifying the tables in the production database. The redo generated by these updates is sent to the two Logical standby databases outside the firewall where it is applied and becomes available for the web site. Internet users access the web site and find all the information about the bank including the bank’s services, eBanking information, Investor Relations and even job openings.

[pic]

Figure 4 – An External Web Site

Since there are two Logical standbys, the web site maintains load balancing by splitting the incoming users across both standbys. In the event that one of the systems fails, the web site still functions using the remaining standby. When the failed standby comes back online, it is resynchronized with the Production database automatically using the gap resolution capability of Data Guard. In this manner the web site is always 'up' on the outside. If there is a failure at the Production site, the Physical standby can be brought online by failing over, and it will continue to service the developers and send updates to the web site. With this implementation the bank was able to separate the developers from the users very easily, preventing any problems with the web site.

Bank Number 1 – Online Banking

The same bank has also implemented an online banking system to service their customers' banking needs. In this case the Production database and its Physical standby are outside the firewall, where the users perform their banking on the Production database. The Physical standby provides data protection and rapid failover in the event the primary site is lost. Inside the firewall is a Logical standby that is used for reporting and general querying. Figure 5 shows the setup for this application.

[pic]

Figure 5 – An Online Banking Application

In this application the Production database must reside outside the firewall so that incoming Internet users can perform banking transactions. The Physical standby must also be kept outside so that if a failover is warranted the users can continue without any changes to the security setup that allows them to access the database in the first place.

The Logical standby database is another matter. The Internet users have no need to access the data kept in the Logical standby database, since they access and maintain that data from the Production database. The internal users who wish to report on and query the data could access the Logical standby if it were kept outside the firewall; however, this would require that each user know where the database is, and it would also require that more ports be opened for access. With the Logical standby inside the firewall, only Data Guard needs to send information (the redo) through the firewall, and the users do their work inside without sending any data over the outside-to-inside link.

Bank Number 2 – Branch Banking

In this Data Guard implementation (shown in Figure 6) the databases involved are all within the bank's Intranet with no access from outside. The main users are the employees of each branch, who access the customers' data in the Production database on a central server. Users who need to perform queries and reports on the production data have been moved off to the Logical standby. At a remote site the Physical standby provides DR protection and, if necessary, can easily become the Primary database (through a switchover or failover) for the Logical standby as well.

[pic]

Figure 6 – A Branch Banking Application

The Logical standby database is currently on the same system as the Production database to facilitate management of the two open databases and to prepare for the time when Branch-user queries, as well as the reporting function, can be moved off the Production database onto the Logical standby. When that time arrives, a Physical standby database could be created from the Logical standby (as a cascaded redo destination) and placed on the same server as the remote Physical standby database, completing their DR strategy in the event the entire Primary site is lost and the Branch and Reporting users need to fail over.

New Features for Logical Standby in Oracle Database 10g

Many of the issues faced by customers implementing Logical standby databases in 9i have been either completely solved or, at the very least, improved upon considerably in Oracle Database 10g. The main issues are usually the work that has to be done to create a logical standby database, the lack of support for certain types of data and tables, the lag time for a Logical standby database to catch up with the Primary database, and the general monitoring and management tasks the DBA must perform. This next section discusses the following subjects and how Data Guard has improved them in Oracle Database 10g.

• Zero Downtime Logical Standby Database Creation

• Support for more Data Types

• Real Time Apply

• Flashback Database and SQL Apply

o Eliminating the need for a Delayed Standby and re-creating the Primary after Failover

• Improved and more Secure Ease of Use

Zero Downtime Logical Standby Creation

The first thing a potential logical standby customer runs into is the complexity and fragility of creating the logical standby database. Not only is there a lot of up-front work that has to be done, there are also many steps that must be followed to the letter for SQL Apply to function correctly. In addition, to set up a logical standby you had to either use a current cold backup of your primary database or attempt to quiesce the production database in order to get a consistent view of the transactions. The fact that a number of customers, in spite of these drawbacks, are and have been willing to undertake this adventure is a solid testimonial to the value of the Logical Standby technology and to the functionality of the current 9i version.

So, what does Data Guard SQL Apply in Oracle Database 10g bring to the table to make the life of the customer easier and the procedure for setting up a logical standby database more bulletproof?

• Zero Downtime Instantiation!

The ability to set up your logical standby without bringing down the Production database or even affecting the progress of the users, and without the dependency on the Resource Manager if you want to use a hot backup of the primary database.

How does it work?

Quite simply, in Oracle Database 10g a logical standby starts its life as a pseudo-physical standby until it has reached the point where there are no in-flight transactions, and then morphs into a full logical standby.

You still have to make sure the primary database can support a logical standby database that meets your needs and is ready to do so. This includes checking for unsupported tables (far fewer now), checking the uniqueness of the tables, and turning on FORCE LOGGING, supplemental logging and, of course, ARCHIVELOG mode. If you don't have a password file you need to create one now, as it will be required on both sides of your configuration. The following are the general steps required to create a Logical Standby database in Oracle Database 10g.
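As a quick illustration, the prerequisite checks and settings mentioned above can be performed on the primary database with commands such as the following (the DBA_LOGSTDBY_NOT_UNIQUE query is one common way to find tables lacking sufficient uniqueness):

SQL> SELECT DISTINCT owner, table_name FROM DBA_LOGSTDBY_UNSUPPORTED;

SQL> SELECT owner, table_name FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE bad_column = 'Y';

SQL> ALTER DATABASE FORCE LOGGING;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;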

1. Take an online backup of your primary database. (For that matter, you can take ANY backup of your Primary database, even last Sunday's, as long as you have the archive logs available to bring it up to date, automatically!)

2. Create a logical standby control file. SQL Apply has its own control file now. The syntax is the same as for a physical standby, with the keyword 'PHYSICAL' replaced with 'LOGICAL': ALTER DATABASE CREATE LOGICAL STANDBY CONTROL FILE AS 'LOGSTNDBY.CTL'; It is as simple as that. A lot of the manual steps in 9i are automated and hidden behind this one simple command. The dictionary build is started automatically, the start and end SCN numbers of the build are stored in the control file, and the control file is marked as a Physical/Logical control file, making the transformation to a full logical standby that much easier later.

3. Copy the backup files, the standby control file, and the initialization parameter file (if you are using an SPFILE you need to create a text version first) over to the standby system. You will notice that we did not mention copying the archive logs. That's because normal gap handling will fetch them automatically; they just need to be on disk and still in the primary database's control file archive log list.

4. Set up the minimum initialization parameters to start redo transport from the primary to the standby and redo reception at the standby side. At a minimum, this means the LOG_ARCHIVE_DEST_n parameters on the primary, and the STANDBY_ARCHIVE_DEST, DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT, FAL_SERVER, FAL_CLIENT, and PARALLEL_MAX_SERVERS parameters on the standby. You will also have to modify the CONTROL_FILES parameter to point to your logical standby control file, as well as any other parameters with pathnames in them, as usual. You MUST create a password file for the standby now, otherwise redo transport will not be able to function. As usual, you must also have a listener for this standby up and running and the appropriate TNSNAMES definitions on both systems.
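For example, the standby side of a minimal parameter file might contain entries like these (all pathnames and service names here are hypothetical and must match your own environment):

CONTROL_FILES='/oracle/stdby/LOGSTNDBY.CTL'

STANDBY_ARCHIVE_DEST='/oracle/stdby/arch'

DB_FILE_NAME_CONVERT='/oracle/prod/','/oracle/stdby/'

LOG_FILE_NAME_CONVERT='/oracle/prod/','/oracle/stdby/'

FAL_SERVER='prod_db'

FAL_CLIENT='logical_std'

PARALLEL_MAX_SERVERS=9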

5. Start and mount the standby using STARTUP MOUNT. At this point you have a running pseudo-physical standby. To start redo shipping from the primary, perform a log switch on the primary, start managed recovery on the standby with ALTER DATABASE RECOVER MANAGED STANDBY DATABASE; and wait for it to complete. The MRP knows this is actually a logical standby from the SCN numbers stored in the control file, and will perform point-in-time recovery to the dictionary build end SCN and stop. If it is missing any archive logs (gaps) that it needs to bring the standby up to this SCN, it will use the FAL gap resolution mechanism to fetch them automatically from the primary, as usual. That's why they have to be on disk and still in the control file. If they are not, you will have to bring them over yourself and do some manual recovery of the standby first.

6. At this point you have recovered your pseudo-physical standby up to the point where the dictionary is present and all in-flight transactions are resolved and committed. You are ready to 'morph' your standby into its full Logical self! This is a single activate command: ALTER DATABASE ACTIVATE STANDBY DATABASE; Since you set up the pathname conversion parameters above, the data and log files will be correctly modified to their current location.

7. All that's left to do now is to complete the transformation by resetting the database name and id using the DBNEWID utility. This part of the procedure is the same as in 9i and includes a couple of shutdowns and startups (including an OPEN RESETLOGS), running the DBNEWID utility, fixing the parameter file (changing the database name, for example) and, of course, recreating the password file. You might also want to use this point to create the missing temporary data files too.
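The DBNEWID step might look like the following from the operating system prompt, where the new database name is hypothetical (the utility prompts for confirmation before making any change):

% nid TARGET=SYS/password DBNAME=LOGSTDBY

After the utility completes, change DB_NAME in the parameter file to match and recreate the password file.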

8. You are now running a bona fide logical standby. Redo should be shipping again from the primary, so start up SQL Apply with ALTER DATABASE START LOGICAL STANDBY APPLY; without the 'INITIAL' keyword. SQL Apply knows where to start from the same SCN in the control file that the MRP used.

Of course, this is just an overview of the steps required to get a logical standby up and running. As you can see, there are fewer steps, no downtime for the Primary, and the procedure is much more bulletproof. For a complete, detailed look at the procedure, as usual, refer to the appropriate chapter of the documentation.

Support for More Data Types

The lack of support in a Logical standby database for certain data types and tables has been improved in Oracle Database 10g, although there are still some restrictions.

For data types, there is new support for tables that contain columns with the following data types:

• LONG

• LONG RAW

• Multi-byte CLOB

• NCLOB

• BINARY_FLOAT (New in Oracle Database 10g)

• BINARY_DOUBLE (New in Oracle Database 10g)

In addition to the new data types, index-organized tables (IOTs) are also supported, with some restrictions:

• No LOB columns in the IOT

• No IOT with Overflow

Real-Time Apply

In the current release of Data Guard, neither Redo Apply nor SQL Apply standbys will process and apply redo sent from the Primary database until a log switch has been executed at the Primary database. This has no impact on the level of data protection gained by having the standby, since the redo is off the primary and safely somewhere else. However, it can increase the time required to perform a failover or switchover to a Physical standby, or delay the availability of up-to-date data in a Logical standby. If desired (and configured), with Oracle Database 10g the redo data is applied to the standby database as it is received from the primary database. This means that the data is in the standby database before a log switch has occurred at the primary database. For a Physical standby this means that failover and switchover do not have to wait for the current redo to be applied before completing the operation. For a Logical standby the availability of the data is much faster, and queries will return data that was just committed on the Primary with much less lag time.

The use of Real-Time Apply is a configurable option that can be enabled at any or all of your standbys, using new keywords in the existing apply engine syntax.

• For SQL Apply:

o ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

• For Redo Apply:

o ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

Flashback Database and SQL Apply

One of the biggest problems today after a failover is that the original primary database must be recreated as a standby to return your configuration to its original setup. This means copying all of the datafiles from the newly failed-over Primary back to the original primary system, which can mean a lot of work and time without a standby. Of course, this assumes that the failure encountered did not actually damage the original database.

The following steps assume that the user has performed a failover and that Flashback database had been enabled on the old primary database. Without restoring the old primary, this procedure brings the old primary back into the Data Guard configuration as a new standby.

1. On the new primary database, find the SCN at which the old standby database became the new primary database, using the following query:

SELECT value FROM dba_logstdby_parameters where name = 'END_PRIMARY_SCN';

2. After the old primary site becomes available again, mount the old primary database and flash it back to the 'standby became primary' SCN discovered in step 1.
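Using the SCN value returned by the query in step 1 (shown here as a placeholder), the flashback itself is a single command:

SQL> FLASHBACK DATABASE TO SCN <became_primary_scn>;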

3. Set the database guard on.

ALTER DATABASE GUARD ALL;

4. Open the old primary with resetlogs

ALTER DATABASE OPEN RESETLOGS;

5. Restart SQL apply by pointing it to the new primary database

ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY;

6. On the new primary enable log transport to the old primary (new standby).

In this manner, only the SQL Apply dictionary of the new Primary needs to be retrieved (automatically!), which removes the requirement of copying all of the data files across the network.

An additional benefit of combining Flashback Database with SQL Apply is that a delay on the standby is no longer necessary if Flashback is enabled on the standby. In the event that some human error occurs on the primary, the standby can be 'flashed back' to before the error and a failover performed, reducing the downtime needed to recover from the error and allowing the Logical standby to be used with Real-Time Apply, giving queries and reports access to up-to-date data.

Secure Switchover

The act of switching roles between a Primary database and either a Physical or Logical standby database is pretty much the same today for the DBA but the work that goes on behind the scenes is quite different.

With a Physical standby, the users have to be logged off the database and the final bit of redo is sent over to the standby, notifying it that a switchover has been started. Once this redo is received and applied at the Physical standby, the MRP (apply engine) stops and the standby is ready to become a Primary database.

A Logical standby database must do much more work, since the current Primary is not yet a 'Logical' standby database and has no real knowledge of what the current Logical standby looks like. A new dictionary build must be performed on the new Primary and that redo sent to the old Primary, now the new Logical standby. Until the redo containing the new dictionary is received at the new Logical standby, it cannot apply the redo coming in from what is now the Primary database. This leaves a very small window in which some redo might be sent to the standby that cannot be recovered in the event that the new Primary fails. It is generally a very small window, but a window just the same. Data Guard in Oracle Database 10g provides a new PREPARE command that closes this window completely. A switchover is now performed as follows:

On the current Primary, you first issue the PREPARE command:

ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY

This informs the primary that it will become a logical standby and can start receiving redo from the current logical standby. Once that is done, the second PREPARE command is executed on the logical standby that is to become the Primary database. This tells the standby to start the dictionary build and ship the redo to the current primary in preparation for the switchover.

ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY

When this command completes and you are returned to the SQL prompt, the dictionary has been built and safely sent to the current Primary database. Now you can proceed with the normal switchover to complete the operation.

On the current Primary database:

ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY

On the Logical standby database:

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY

And the process is complete. Finally start the SQL Apply engine on the new Logical standby with

ALTER DATABASE START LOGICAL STANDBY APPLY

And the dictionary will be processed and all new redo applied correctly.

It should be noted that if there is a Physical standby in the configuration, the switchover should be performed between the Primary and the Physical standby, since the Logical standby will not know the difference between the two and will continue to operate normally. If the switchover is done between the Primary and the Logical standby, the Physical standby becomes a cascaded standby from the new Logical standby database and not a Physical standby of the new Primary.

Monitoring the Logical Standby Database

Setting up a Logical standby requires knowing which tables will not be supported at the standby side. In the current version of Data Guard, the DBA_LOGSTDBY_UNSUPPORTED view does not always explain exactly what is wrong with a given table. For example, a table that is an index-organized table would show up as unsupported, but upon examining the columns in the table you would not see any unsupported data types. A new column, called ATTRIBUTES, has been added to the view to fully explain what is wrong with the table. In this manner you will see immediately that an unsupported table uses table compression or is an unsupported IOT, without having to look at the columns.

SQL> SELECT DISTINCT table_name, attributes

2> FROM dba_logstdby_unsupported

3> WHERE owner = 'HR';

TABLE_NAME ATTRIBUTES

------------ ---------------------

COUNTRIES Index Organized Table

DEPARTMENTS Table Compression

EMPLOYEES Table Compression

JOBS Table Compression

JOB_HISTORY Table Compression

LOCATIONS Table Compression

REGIONS Table Compression

7 rows selected.

Another important function that a DBA performs on a regular basis is monitoring the progress of the SQL Apply engine through the archive logs and transactions. This is important to ensure that the Logical standby is keeping up to date and has not run into something that must be manually resolved (a new data file on the Primary, for example).

The DBA_LOGSTDBY_LOG view has been updated to show which archive logs have been completely applied and can be safely deleted if necessary.

SQL> SELECT thread#, sequence#, applied FROM

2> dba_logstdby_log order by sequence#;

THREAD# SEQUENCE# APPLIED

---------- ---------- -------

1 48 YES

1 49 CURRENT

1 50 CURRENT

1 51 CURRENT

1 52 CURRENT

1 53 CURRENT

6 rows selected.

The DBA_LOGSTDBY_PROGRESS view has also been updated to include more information on the current progress of SQL Apply through the redo stream.

Using the new columns in DBA_LOGSTDBY_PROGRESS, you can see more details on the progress of SQL Apply in your standby database. All of the new columns are of the NUMBER data type.

• APPLIED_SEQUENCE# Sequence number for a log containing the APPLIED_SCN.

• APPLIED_THREAD# Thread number for a log containing the APPLIED_SCN.

• READ_SEQUENCE# Sequence number for a log containing the READ_SCN.

• READ_THREAD# Thread number for a log containing the READ_SCN.

• NEWEST_SEQUENCE# Sequence number for a log containing the NEWEST_SCN.

• NEWEST_THREAD# Thread number for a log containing the NEWEST_SCN.

SQL> SELECT applied_scn, applied_thread#, newest_scn,

2> newest_thread#

3> FROM dba_logstdby_progress;

APPLIED_SCN APPLIED_THREAD# NEWEST_SCN NEWEST_THREAD#

----------- --------------- ---------- --------------

961263 1 961263 1

When using Real-Time Apply, the APPLIED values may be greater than the NEWEST values. This is expected and normal behavior.

Managing SQL Apply

Finally, managing your Logical standby database, though necessary, does not have to be complex, whether it is deciding which tables in the Logical standby database will be maintained, bypassing the standby guard to add other objects (new tables, or indexes on current tables), or restarting the apply engine after a failed transaction. The latter two have been transformed from running a package into simple-to-use SQL commands.

Previously, the wildcard matching in the SKIP procedure did not allow some tables to be skipped without also skipping others, because the underscore is itself a single-character wildcard; for instance, if you attempted to skip a table called MI_DATA you would also skip MINDATA. Now, with these additional options, you can better control what is skipped.

DBMS_LOGSTDBY.SKIP(stmt,schema_name,table_name,proc_name,use_like,esc);

• use_like – whether a wildcard pattern match should be performed. The default is TRUE; set it to FALSE to use the escape character.

• esc – specifies what escape character is being used in the pattern matching.
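For example, to skip DML for the table MI_DATA without also skipping MINDATA, you could turn off wildcard matching so that the name is taken literally (the schema and table names here are illustrative):

SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML','MYSCHEMA','MI_DATA',NULL,FALSE);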

Bypassing the standby guard (the security that prevents users from changing the standby database or, at least, the objects maintained by SQL Apply) required executing a package to allow your session to perform modifications to the Logical standby database. Now you can turn the guard off and back on for your session with a simple ALTER SESSION command.

SQL> ALTER SESSION DISABLE GUARD;

SQL> ALTER SESSION ENABLE GUARD;

And last, but definitely not least, restarting the SQL Apply after it has stopped due to some problem that has been corrected, will no longer require finding out what the failed transaction’s id is and executing yet another package to force the transaction to be skipped when SQL Apply restarts. Now this functionality is merely a few extra words on the start command.

ALTER DATABASE START LOGICAL STANDBY APPLY SKIP FAILED TRANSACTION;

Be very careful when skipping transactions. Generally speaking, skipping a DDL operation is fine as long as you are able to reproduce it manually, but if you skip a DML operation you may make your Logical standby unusable.

Rolling Upgrades

In Oracle Database 10g, Data Guard provides the foundation for performing rolling upgrades of the Oracle database software, from the first release of Oracle Database 10g onwards, with minimal interruption of service. This is accomplished by using a Logical standby database and switching over between the Primary database and the Logical standby. The following sections show the basic steps for performing this RDBMS rolling upgrade.

Initial Configuration

As with the banks described at the beginning of this paper, the initial setup might look something like that shown in Figure 7 below.

[pic]

Figure 7 – Step 1

Your configuration consists of the Production database, where the users are currently attached, and a Logical standby, which can be remote or local. Both databases are running the initial software version, which we will call Version X.

Step 1 – Upgrade the Logical Standby

The first step is to stop shipping redo to the Logical standby and upgrade the Oracle software and database at the Logical standby site to Version X+1.
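One way to stop the redo shipping and quiesce SQL Apply before the upgrade is sketched below. The destination number (LOG_ARCHIVE_DEST_2) is an assumption of this example; use whichever destination points at your Logical standby:

```sql
-- On the Primary: defer the archive destination that ships redo
-- to the Logical standby (destination number is an assumption)
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;

-- On the Logical standby: stop SQL Apply before upgrading
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
```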


Figure 8 – Step 2

The redo stacks up at the Primary, waiting for the Logical standby to return. This has no impact on the performance of the Primary database since it is a controlled action. Once the Logical standby database is upgraded, normal testing can be performed to ensure that it functions correctly. The testing must be read-only on the objects being maintained by SQL Apply but can be read-write on any other objects.

Run in a Mixed Environment

You can now re-enable the redo shipping from the Primary database to the Logical standby (running in Version X+1) and the archive logs that were not sent yet will be automatically shipped to the standby and applied.
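Re-enabling shipping and apply is the reverse of the earlier step. A sketch, again assuming LOG_ARCHIVE_DEST_2 is the destination for the Logical standby:

```sql
-- On the Primary: re-enable the deferred archive destination
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

-- On the upgraded Logical standby: restart SQL Apply
ALTER DATABASE START LOGICAL STANDBY APPLY;
```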


Figure 9 – Step 3

You can run in this mixed version environment for as long as necessary until you are confident that everything works as you expect.

Switch over to the Logical Standby

When ready, you can perform a switchover and move the users (and applications) from the current Primary database to the newly upgraded Logical standby database. This requires a short period during which the users cannot update either database.
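The switchover itself can be sketched with the commands below. The exact syntax of the logical standby switchover has varied across releases, so treat this as an outline and verify against your release's Data Guard documentation:

```sql
-- On the current Primary: convert it into a logical standby
ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

-- On the Logical standby: convert it into the new primary
ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL PRIMARY;
```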


Figure 10 – Step 4

At this point transactions will be running on the new Primary database, which is running Version X+1. The original Primary database is now a Logical standby. However, since you are not yet shipping redo to the original database, you may want to create a second Physical standby database from the new Primary for added protection. This can be done even before the switchover occurs.

Upgrade Original Primary and Physical Standby

With the users and transactions executing on the new Primary, you can now upgrade the original Primary, which is now a Logical standby.


Figure 11 – Step 5

This is done by shutting down the database, upgrading the Oracle software to Version X+1, and then upgrading the original Primary database itself (on Node One).

Resynchronize The Original Primary

Now you can re-enable the destination on the new Primary and all of the redo that has been created since the switchover will be sent to the Logical standby (on Node One) and applied, bringing it up to date with the Primary.
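To verify that the resynchronized standby has caught up, you can compare the SCN applied by SQL Apply with the newest SCN it knows about, using the DBA_LOGSTDBY_PROGRESS view. A sketch:

```sql
-- When APPLIED_SCN equals (or is close to) NEWEST_SCN,
-- the Logical standby is caught up with the Primary
SELECT applied_scn, newest_scn
  FROM dba_logstdby_progress;
```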


Figure 12 – Step 6

Switch Back to Original Configuration

Now all that is left to do to return to your original configuration is to perform another switchover from the Primary (on Node Two) to the Logical standby (on Node One).


Figure 13 – Step 7

At this point all of the databases have been upgraded to Version X+1 with minimal interruption of service.

Summary

SQL Apply technology boosts the flexibility of Data Guard's data protection and usage through its support of logical standby databases. With a logical standby database, you can have the database open for reporting while the logs are being applied, allowing the logical standby database to be used for other business activities. By combining logical standby databases with physical standby databases, you can do all of this while still maintaining a high level of disaster protection.

Data Guard in Oracle Database 10g extends the functionality of SQL Apply to enable Logical standby databases to serve a wider range of applications.
