Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Help: Standby database config.

Re: Help: Standby database config.

From: Mark Townsend <markbtownsend_at_home.com>
Date: Sun, 01 Jul 2001 21:36:26 GMT
Message-ID: <B764E5B9.5CC8%markbtownsend@home.com>

Human error actually causes about 70% of data loss - disasters only around 4%. For this reason, Data Guard in Oracle9i has been specifically expanded to address human-error scenarios, avoiding the need to resort to time-consuming and cumbersome point-in-time recovery.

Oracle9i supports delayed apply of the redo logs for exactly this type of situation - in this scenario, log propagation would be set up in zero-loss mode (i.e. sent in near real time, not on archive), but the standby waits a pre-configured period of time before applying them.
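
For illustration, a minimal sketch of this setup in Oracle9i - the service name `standby1` and the 30-minute delay are placeholders, not values from this thread:

```sql
-- Primary init.ora: ship redo in near real time via the log writer
-- (zero-loss transport, rather than waiting for an archive-log switch).
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 LGWR SYNC AFFIRM'

-- On the standby: apply received logs only after a 30-minute lag.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;
```

The DELAY clause holds each log for the given number of minutes before it is applied, which is the window in which an accidental DROP TABLE can be caught before it reaches the standby.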

This means that any errors - human or otherwise - can be stopped from being applied. However, the other comments in this thread are also true - a standby that is not up to the minute is not much use for Disaster Recovery.

So that's why Oracle9i supports up to 10 standby destinations and multiple standbys off one set of logs - perhaps 1 remote standby with zero-loss propagation and apply, and another couple of standbys (not necessarily remote) that are 30 minutes and perhaps even 8 hours behind production, protecting against human errors and corruption. Or two or more standbys on the same remote box - one up to date for DR failover, and one lagging for immediate recovery of dropped or lost data, both fed off the same set of redo logs from the production environment.
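
A sketch of such a fan-out in the primary's init.ora - the service names are hypothetical, and the DELAY attribute (in minutes) controls how far each standby lags:

```sql
-- One zero-loss standby for DR failover, two lagging standbys for human error.
LOG_ARCHIVE_DEST_2 = 'SERVICE=dr_stby   LGWR SYNC AFFIRM'   -- up to the minute
LOG_ARCHIVE_DEST_3 = 'SERVICE=lag_stby1 ARCH DELAY=30'      -- 30 minutes behind
LOG_ARCHIVE_DEST_4 = 'SERVICE=lag_stby2 ARCH DELAY=480'     -- 8 hours behind
```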

Note also that redo log corruption, though rare, can be a major pain to work around - Oracle9i Data Guard will detect corruption in the redo logs on apply, and can be set up to halt the production system accordingly - for some, this is enough of a reason to set up a standby in itself.

And finally, rolling forward a standby that lags by 30 minutes may be much quicker than going to backup on the primary.
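
As a sketch (the timestamp is hypothetical), recovering a lagging standby to just before the mistake and opening it might look like:

```sql
-- Stop managed recovery, roll forward to just before the error, then open.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
RECOVER AUTOMATIC STANDBY DATABASE UNTIL TIME '2001-07-01:13:55:00';
ALTER DATABASE ACTIVATE STANDBY DATABASE;
```

Activating the standby makes it a read-write database, so this lagging copy is consumed by the recovery - the separate up-to-date standby remains available for DR.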

in article 3b3f9009_at_news.iprimus.com.au, Howard J. Rogers at howardjr_at_www.com wrote on 7/1/01 2:02 PM:

> Exactly how is not applying the logs automatically going to help you in this
> situation?
>
> What I mean is, if you detect that a drop table has happened within minutes
> of it happening, then the redo for that will likely be in your main
> database's current redo log. Hence it won't have been archived, and hence,
> the standby won't have been 'tainted' by it. Whatever method of applying
> redo logs you went for, your standby would still be OK.
>
> But if the drop table wasn't detected until some considerable time had
> elapsed, then (a) in automatic managed standby mode, your standby suffers
> from the same problem, or (b) in manual standby mode, your standby is
> hopelessly out of date (and/or suffers from the same problem as well!)
>
> Since you can't skip archives, you either apply the one with the drop table
> command in it, or you forfeit your ability to apply any others *after* that
> log - hence the standby gets more and more out of date.
>
> And unless you propose only to manually apply the logs at the end of the
> day, after doing exhaustive checks to make sure that there are no problems
> with the production system whose replication to the standby would be
> undesirable, this really isn't a very workable proposition. Without those
> checks, you'll be just as likely to apply a piece of redo containing
> something very unfortunate as you would have done using automatic managed
> standby mode.
>
> Standby databases are meant to protect you from disaster. User errors,
> however inconvenient, are not disasters, and ordinary incomplete recoveries
> are your 'way out' of them. I think it an extremely bad idea to confuse the
> two, and your friend is giving you very poor advice.
>
> Regards
> HJR
>
>
> <u518615722_at_spawnkill.ip-mobilphone.net> wrote in message
> news:l.993988800.1535369873_at_adsl-151-197-238-2.phila.adsl.bellatlantic.net...

>> A friend of mine suggested that we configure our standby database
>> without using managed recovery, for the following reason:
>> If you have corruption, or bad data gets into the online db, you have no
>> way to easily spot and stop that data from automatically migrating into
>> the standby db.  If you keep everything the same but only apply the
>> recovery on the failover side every 2 hours or so, you then have 2 hours
>> to catch and stop the recovery - like recovering to a point in time just
>> before the corruption.  The way you are doing it now with managed
>> recovery, I don't think it buys you much failover capability.
>> 
>> Is this really a good idea?
>> 
>> 
>> 
>>> Not in my book. A stand-by database that can't be brought up in a
>>> minute or two isn't a stand-by database. Exactly what form of
>>> corruption is your friend thinking is going to occur in the redo logs?
>>> Daniel A. Morgan
>> 
>> What he means is the case where somebody drops a table mistakenly, or
>> something similar.  If we just let Oracle write all the logfiles to the
>> standby destination, but apply them 2 hours behind, then when we need to
>> bring up the standby database we can apply the rest of them.  Otherwise,
>> we can use it to recover until a point in time.
>> 
>> thanks
>> 
>> --
>> Sent by  dbadba62 from hotmail subdomain  of com

Received on Sun Jul 01 2001 - 16:36:26 CDT
