Re: Logical Standby Issues (cont.)

From: Mark Strickland <strickland.mark_at_gmail.com>
Date: Thu, 27 Jul 2006 09:04:04 -0700
Message-ID: <90ad14210607270904v603da0cfy956a2c03975bfa35@mail.gmail.com>


I will answer the last three responses with a single response:

  1. I originally thought SQL Apply was doing full table scans, but that was because I was running a currently-executing update statement through Explain Plan, which showed a full table scan. After pulling the actual execution plan from v$sql_plan, I could see that index unique scans were being used, as I would have expected, given that the table has a primary key.
  2. Of the two tables that got a new column set equal to an existing column for all rows (which resulted in millions of individual updates on the logical standby), the first table has two FKs, but the FK columns are indexed. The second table, from this past weekend's DDL-with-update, has no FKs.
  3. The parameters that Oracle Support had me change in my test environment were set as follows:
SQL> exec dbms_logstdby.apply_set ('_EAGER_SIZE', 2000);
SQL> exec dbms_logstdby.apply_set ('_MAX_TRANSACTION_COUNT',12);
SQL> exec dbms_logstdby.apply_set ('MAX_SGA', 500);  -- Support suggested ~3/4 of the shared pool

I chose 500 MB as the setting for MAX_SGA (up from the default of 30 MB). I believe the large increase in MAX_SGA accounted for the most dramatic improvement in performance; before the change, I could see SQL Apply constantly paging out to a LOB in the SYSAUX tablespace.
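For reference, the values these apply parameters currently hold (including the underscore ones) show up in the DBA_LOGSTDBY_PARAMETERS view; a quick sketch, assuming DBA access on the standby:

```sql
-- Current SQL Apply settings on the logical standby, including
-- hidden parameters set via dbms_logstdby.apply_set.
SELECT name, value
FROM   dba_logstdby_parameters
ORDER  BY name;
```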

  4. I have parallel_max_servers set to 9.
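On point 1, for anyone who hits the same confusion: Explain Plan re-parses the statement and can show a different plan than the one the running cursor is actually using. In 10g the actual plan can be pulled from the cursor cache; a sketch (the sql_id here is a placeholder):

```sql
-- Actual execution plan of a cursor in the shared pool (10g).
-- 'abcd1234wxyz' is a placeholder sql_id, e.g. from v$session.sql_id.
SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234wxyz', 0));

-- Or query v$sql_plan directly:
SELECT id, operation, options, object_name
FROM   v$sql_plan
WHERE  sql_id = 'abcd1234wxyz'
ORDER  BY id;
```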

In my test environment, it's taking about 4 minutes for SQL Apply to get through each 10 MB archived log. If we can't tune that any better, in Production we'll have to re-instantiate large tables that get mass updates rather than let SQL Apply chew through the millions of update statements. Our largest table that is maintained by SQL Apply is 12 GB, which takes about 4 hours to re-instantiate. Inconvenient but doable.
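To make that tradeoff concrete, here is a back-of-the-envelope calculation with the numbers above (the 1 GB redo volume is a made-up illustration, not a measurement):

```python
def apply_minutes(redo_mb, mb_per_log=10.0, minutes_per_log=4.0):
    """Minutes for SQL Apply to work through redo_mb of archived redo,
    at the observed rate of one 10 MB log every 4 minutes."""
    return redo_mb / (mb_per_log / minutes_per_log)

# Observed apply rate: 10 MB / 4 min = 2.5 MB of redo per minute.
rate_mb_per_min = 10.0 / 4.0

# Hypothetical mass update generating 1 GB of redo: roughly 6.8 hours
# to apply, versus about 4 hours to re-instantiate even the largest
# (12 GB) table, which is why re-instantiation can be the cheaper option.
hours_for_1gb = apply_minutes(1024) / 60.0
```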

I might have anticipated how SQL Apply (via Streams) would handle a mass update if I had had better insight into the contents of the redo stream; I've just never taken the time to figure all that out. Our site is new to Logical Standby, so we're climbing the learning curve. Other than this nasty little surprise and the discovery that Change Data Capture doesn't work in 10.1.0.3, Logical Standby has been quite reliable since we implemented it in Production on June 10th. We're not yet reliant on the logical standby for reporting, so the impact of the surprises has been manageable. We'll be at 10.1.0.5 in Production in October, at which point I expect we will implement Change Data Capture and migrate the bulk of our reporting load over from the primary.

Thanks to all who offered ideas, insights, etc.

Mark

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Jul 27 2006 - 11:04:04 CDT
