Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: Oracle 9i R2 standby database

From: koert54 <nospam_at_spam.com>
Date: Mon, 24 Feb 2003 13:53:08 +0100
Message-ID: <3e5a1578$0$2180$4d4efb8e@news.be.uu.net>


The first questions I would ask myself are: what is the bandwidth and latency of the network between the 2 machines, and how much redo is being generated?

A couple of colleagues and I do not use log shipping through the ARCH/LGWR processes
anymore, because it slows down the primary DB on slower networks. That holds even for the so-called
physical standby database (the old way) with log shipping through ARCH and a large number of log groups. Unless we're working on at least a 10Mb/100Mb/1Gbit
network, depending on redo generation, we avoid it.

Anything less - 512Kbit, 2Mbit, etc. - use your own custom scripts to compress/ship/uncompress/apply.
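For illustration, a minimal sketch of what such a compress/ship/uncompress script could look like - in Python here, purely as an example (the poster's actual scripts are not shown; the host name, paths, and the commented-out scp/sqlplus apply step are placeholders I've made up):

```python
import gzip
import shutil

def compress_log(src, dst):
    """gzip an archived redo log on the primary before shipping it
    over a slow link - compression is the whole point on 512Kbit-2Mbit."""
    with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

def uncompress_log(src, dst):
    """Reverse step on the standby side, before applying the log."""
    with gzip.open(src, "rb") as f_in, open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

# On the primary (hypothetical paths and host name):
#   compress_log("/oraarch/arch_123.log", "/tmp/arch_123.log.gz")
#   subprocess.run(["scp", "/tmp/arch_123.log.gz", "standby:/oraarch/"], check=True)
#
# On the standby:
#   uncompress_log("/oraarch/arch_123.log.gz", "/oraarch/arch_123.log")
#   then apply, e.g. via sqlplus with RECOVER STANDBY DATABASE
#   (the exact apply invocation is site-specific).
```

The key property versus ARCH-driven shipping: this works entirely on archived (offline) logs, so nothing here ever holds a file descriptor on an online redo log that LGWR might need back.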

The problem with slower lines is that ARCH will open a file descriptor on the online redo log
that needs to be shipped over the network. If your network is slow and a lot of redo
is being generated, chances are high that LGWR will need that online redo log for reuse before ARCH
is finished with it. The only workaround is to create as many redo log groups as needed to prevent
this during your peak activity - but even then it's quite a risk.
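To put rough numbers on "as many redo log groups as needed", here is a back-of-the-envelope sketch - my own arithmetic, not from the post, assuming steady redo and shipping rates and ignoring latency:

```python
import math

def min_log_groups(redo_rate_mb_s, link_mb_s, log_size_mb):
    """Rough lower bound on redo log groups needed so LGWR never has to
    reuse an online log that ARCH is still shipping over the network.

    While ARCH ships one log (log_size / link bandwidth seconds), LGWR
    keeps filling fresh logs (one every log_size / redo rate seconds).
    We need enough groups to absorb the logs filled during one ship,
    plus the group currently being written.
    """
    ship_time = log_size_mb / link_mb_s        # seconds to ship one log
    fill_time = log_size_mb / redo_rate_mb_s   # seconds for LGWR to fill one log
    return math.ceil(ship_time / fill_time) + 1

# Example: 1 MB/s of redo over a ~2Mbit (0.25 MB/s) link, 100 MB logs:
# shipping one log takes 400s while LGWR fills a log every 100s,
# so you need at least 5 groups just to cover a single in-flight log.
```

Note what the formula also shows: if the sustained redo rate exceeds the link bandwidth, ARCH falls further behind with every log and no number of groups saves you - which is exactly why "even then it's quite a risk".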

The solution to this problem is quite obvious, and I hope Oracle will change this mechanism in 10i (there's
a request for change on http://ers.oracle.com/) - for physical standby, that is: do not use the online
redo logs for shipping. Instead, use the offline (archived) redo logs, and don't let ARCH do the shipping but another external process
(and add some compression!)
YES - with physical standby the gap between primary and standby could become larger, but at least my
primary database will not suffer a performance loss.

So unless you have fiber between the two sites of your 2TB OLTP DB, I would think twice about the consequences regarding
performance :-)
Heck - last month I even saw an Oracle consultant throw out Data Guard and use his own custom perl scripts on a 9iR2 DB :-)

"Roger Jackson" <rjackson1_at_hotkey.net.au> wrote in message news:3e5a0058_1_at_news.iprimus.com.au...
> Hi,
>
> Our site is looking at possibly using Data Guard with Oracle 9i R2.
> I was wondering if you could answer some questions before we make any
> further decisions.
>
> I'd like to concentrate on the two flavours of Data Guard available in 9.2:
> redo log apply and sql apply:
>
> - What (if any) performance implications are there between the two options?
> (on both the source and target servers).
>
> - Is SQL apply the only method which allows read access to the target
> database?
>
> - Is there anyone out there using sql-apply with large (2TB) OLTP databases?
>
> - How does the failover process work (for both models), and how would we
> fail back to the original source server?
>
> - Does either option support RMAN backups of the target database?
>
> - In the event of a media recovery of the source database, how do we
> resynchronise the target (for both models)?
>
> - Has anybody come across problems using Data Guard on AIX 5L (5.1)?
>
> Thanks in advance.
> Cheers,
>
> Roger
Received on Mon Feb 24 2003 - 06:53:08 CST
