Re: Best practice for Dataguard in 10g?

From: Ahbaid Gaffoor <>
Date: Sun, 06 Sep 2009 02:13:29 -0700
Message-ID: <>

I've been using FSFO for about 8 months now in production on some heavy redo-producing systems.

I hit a few bugs along the way, but with a few patches applied it was stable, and I've been running it on a few systems.

I agree that it's good to control when you fail over, but having a system fail over to another in under 40 seconds (that's my worst case; best case was 10 seconds) on a heavily used prod system, without any manual intervention, was incredible. Even doing the steps manually, you can't beat that. In one power-loss event I had a database failed over to another datacenter in 35 seconds using FSFO.
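
If anyone wants to try it, the broker side of the setup is small. Here's a minimal sketch via DGMGRL, assuming an existing broker configuration (the 30-second threshold is just an illustration, and note that on 10.2 FSFO also requires LGWR SYNC shipping with the configuration in Maximum Availability mode):

    DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
    DGMGRL> ENABLE FAST_START FAILOVER;
    DGMGRL> START OBSERVER;
    DGMGRL> SHOW FAST_START FAILOVER;

FastStartFailoverThreshold is how long the observer waits after losing contact with the primary before it initiates failover, so it's the main knob behind the times above. The observer itself should run on a third host, away from both the primary and the standby.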

so I'd recommend it from a fully patched release upwards.

I still have to dig into the 11g FSFO-specific enhancements, but I do know you can now control failover programmatically.
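
From what I've read, the hook is the DBMS_DG package. Something like the block below should let the application itself request a fast-start failover; this is just a sketch (the condition string is an illustrative reason, and I haven't run it myself yet):

    -- Run on the primary; asks the observer to initiate a fast-start failover.
    -- A return status of 0 means the request was accepted.
    DECLARE
      status BINARY_INTEGER;
    BEGIN
      status := DBMS_DG.INITIATE_FS_FAILOVER('App health check failed');
      DBMS_OUTPUT.PUT_LINE('INITIATE_FS_FAILOVER returned: ' || status);
    END;
    /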



Nuno Souto wrote:
> Ahbaid Gaffoor wrote, on my timestamp of 3/09/2009 1:22 AM:
>> Are you planning on using Fast Start Failover?
>> If you are, then I'd recommend not doing this unless you're on a release
>> with all patches applied for FSFO
> No way I'm using FSF. Last thing I need is the
> database deciding it should fail over!
> I'll decide that, thank you very much.
>> Some of the bugs do not show up unless your redo generation rate is
>> high, where high is around 2MB of redo per second, or about 200GB in 24 hrs
> Yeah, I know. We do 500GB/day, in spurts. That's
> why I am concerned.
>> If you can go to 11g then you can also investigate dataguard
>> compression, and some of the more configurable failover options
> No can do. DW software we use is not certified
> for 11g, yet. Maybe next year.
>> I have not seen any issues specifically due to using LGWR vs. ARC for
>> shipping, using LGWR puts you in better shape (IMO) for setting up
>> FSFO should you need it.
> Thanks. I'm leaning towards using LGWR
> at the moment, as I'm on a patched-up release.
> All feedback seems to indicate it's OK at that
> release level.
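
For reference, switching shipping from ARCH to LGWR is just a destination attribute change. A sketch, with an illustrative service name of stby (swap in your own service and DB_UNIQUE_NAME):

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH;

One caveat: if FSFO ever comes into the picture on 10.2, that ASYNC would need to become SYNC AFFIRM, with the configuration raised to Maximum Availability.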
