Re: Extracting Redo Log Info/LogMiner/Shareplex

From: Pete Sharman <peter.sharman_at_oracle.com>
Date: Thu, 27 Sep 2001 08:32:32 -0700
Message-ID: <uMHs7.14$78.3230@inet16.us.oracle.com>


Provided the ROWID hasn't changed (due to export/import or similar), you can always look up the row in the database using the ROWID embedded in the SQL_REDO/SQL_UNDO columns. Unfortunately, I don't know of any way to get this information the way you want it in 8i if the ROWID has changed. After all, that's why the enhancement came in 9i. However, maybe someone else can think of a way to do it.
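To make that concrete, here's a minimal sketch of the lookup. The table name and the ROWID literal are hypothetical - you'd take the actual ROWID out of the SQL_REDO/SQL_UNDO text that LogMiner generates:

    -- Pull the redo for one table; the ROWID appears inside the
    -- generated SQL_REDO/SQL_UNDO statement text.
    SELECT scn, operation, sql_redo, sql_undo
      FROM v$logmnr_contents
     WHERE seg_name = 'CUSTOMERS';        -- hypothetical table name

    -- If that ROWID is still valid, fetch the current row directly:
    SELECT *
      FROM customers
     WHERE ROWID = 'AAAHfVAABAAAMYkAAA';  -- example ROWID, illustrative only

The second query is exactly where the scheme breaks down if the ROWID has changed since the redo was written - the lookup either misses or hits the wrong row.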

Regarding the tables that don't have primary keys, how else can you uniquely identify the rows? It has to be through a primary key or the ROWID - no other choice. Without starting a religious war (I know there are situations where this would be a valid configuration!), this seems to me more a design flaw in the application than anything else.

Data Guard in 9i is not a product that I think of as a replication product (Advanced Replication is the Oracle methodology for that). However, in some ways the new levels of protection can provide statement-level replication, I suppose. Let me explain. Data Guard is the enhanced version of the automated standby product in 8i (it was actually backported to 8.1.7 for some platforms, but I'm not sure how much of the new functionality was backported, so let's leave that out of the picture for the moment). Data Guard has four levels of protection:

  1. Delayed protection - same as 8i. Redo logs are automatically shipped from the primary to the standby site, where the standby is mounted in recovery mode. Similar in many ways to log based replication.
  2. Rapid protection - LGWR slaves send redo vectors to the standby. No guarantee that the modifications will be available on the standby. By its very nature, the fact that we use LGWR slaves means that this is still an asynchronous mechanism.
  3. Instant protection, or no data loss mode - the LGWR process sends the redo records to the standby. No commit on the primary until there is confirmation that the record is available in at least one standby database (there can be up to 10 standbys per primary in 9i). Mechanism is therefore synchronous, and should only be considered for sites that have a robust network.
  4. Guaranteed protection - for those really, really paranoid sites where data absolutely MUST be available on both the primary and the standby. LGWR still sends the redo records to the standby, but in this case there is an additional guarantee of no data divergence. Standby can't diverge from the primary at all, so if a standby is unavailable, processing stops on the primary as well. As I mentioned, only for those very few clients that are really, really concerned that data must be available in both sites.

Options 2, 3, and 4 require additional standby redo logs to function. These should be created on both the primary and the standby, even though they aren't used on the primary. This allows for graceful switchover and switchback with the least DBA intervention.
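As a rough sketch of what options 3 and 4 involve in terms of setup (file paths, sizes, group numbers, and the 'standby1' service name are all hypothetical - check the 9i Data Guard documentation for your exact release):

    -- Create standby redo logs on BOTH sites; sizes should match
    -- the online redo logs (group number is illustrative).
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('/u01/oradata/prod/stbyredo04.log') SIZE 100M;

    -- Ship redo with LGWR synchronously for the no-data-loss modes;
    -- AFFIRM means the commit waits for the standby's acknowledgement.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=standby1 LGWR SYNC AFFIRM';

The LGWR SYNC AFFIRM attributes are what distinguish the synchronous modes from the old 8i-style ARCH-based shipping in option 1.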

Now, going back to your question of how Data Guard deals with redo records for tables without PKs: it's exactly the same as writing the information into the primary site's redo logs. The redo is just written in two places now.

As for the performance of trolling the whole redo log, sorry I didn't pick this up more clearly the first time. Performance is something we always work on, so it should improve with each release. One option available to you, I suppose, is to shrink the redo logs - only joking! From the sound of what you're doing, I don't think LogMiner was originally designed to continuously process new data; it's more of a tool to use in an ad hoc manner. Having said that, we constantly find customers using functionality in new and exciting ways that we hadn't thought of when developing the product, so maybe there's some work happening to improve the speed of this operation too. I think the problem is that log files don't have an equivalent of a datafile header to allow quick access directly to a particular set of records, but I may be wrong about that.
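For reference, the ad hoc pattern I mean looks roughly like this from SQL*Plus (the log file path, dictionary file, and SCN values are all hypothetical):

    -- Register the log file to mine.
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
      LogFileName => '/u01/oradata/prod/redo01.log', -
      Options     => DBMS_LOGMNR.NEW);

    -- Restrict the session to an SCN window. Note this filters the
    -- OUTPUT; LogMiner still reads through the whole file to find it.
    EXECUTE DBMS_LOGMNR.START_LOGMNR( -
      StartScn     => 123456, -
      EndScn       => 123999, -
      DictFileName => '/u01/oradata/dict.ora');

    SELECT scn, sql_redo FROM v$logmnr_contents;

    EXECUTE DBMS_LOGMNR.END_LOGMNR;

That's the behaviour you observed: StartScn/EndScn narrow the rows returned from V$LOGMNR_CONTENTS, but they don't let LogMiner seek directly to that range within the file.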

--
HTH.  Additions and corrections welcome.

Pete
Author of "Oracle8i: Architecture and Administration Exam Cram"
Got my life back again now that the book is released!

"Controlling developers is like herding cats."
Kevin Loney, Oracle DBA Handbook

"Oh no, it's not.  It's much harder than that!"
Bruce Pihlamae, long-term Oracle DBA

"Eric" <bytewarrior2000_at_yahoo.com> wrote in message
news:497b38b.0109261114.60bfa81b_at_posting.google.com...

> Thanks for the info, Pete. I've got a few follow up questions.
>
> We actually need the original rowid for a couple of reasons. We'll be
> going to 9i in some places, stuck indefinitely in Oracle 8i for other
> applications. Therefore we can't rely on supplemental log information
> allowed by 9i.
>
> Also, once we have rowid, we use it to look up other information in
> the row which may not have changed in the update (e.g. we still need
> to know a customer's account number even if only his balance changed
> in a particular transaction). Obviously, if the row is migrated,
> logminer won't let us do that currently. Perhaps there's a way to use
> the current location of the row (physical address) in a query (select
> * from table where PHYSROWID = x?).
>
> The other reason is that there may not be a primary key defined on the
> table, and no way to get a particular row without knowing the rowid.
>
> Wondering how Oracle 9i Data Guard deals with this if it's doing
> statement-level replication on these types of tables? Or is there a
> restriction which mandates "insert only" unless there is a unique key
> to use???
>
> Regarding LogMiner, START_SCN is allowed as you state. However, this
> appears to be a filter against the entire log, rather than a way to
> position intelligently into the log. In other words, the input is the
> same, but the output is restricted to the specified range. The
> problem therefore remains that we have to perform tons of IO in order
> to get the last few records.
>
> Eric
Received on Thu Sep 27 2001 - 10:32:32 CDT

