

FW: Some Dataguard is good, lots more must be better? (specifically, when do most actual failovers really occur?)

From: Laimutis Nedzinskas <>
Date: Thu, 21 Sep 2006 10:27:18 -0000
Message-ID: <>


From: Laimutis Nedzinskas
Sent: 21 September 2006 10:26
To: 'Carel-Jan Engel'
Subject: RE: Some Dataguard is good, lots more must be better? (specifically, when do most actual failovers really occur?)

From: Carel-Jan Engel []

On Thu, 2006-09-21 at 08:46 +0000, Laimutis Nedzinskas wrote:

        No, I am not confusing them. I just was not 100% sure whether Oracle can do it, because I've never tested it myself.

Your phrase 'Well, it is not a good option for maximum data protection (as Oracle defines it)' is misplaced then. You don't know; you haven't tested whether it is a good option. Stating an untested assumption as a fact is not right. I did test this a lot. It actually works.
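The delayed-apply option being discussed can be sketched roughly as follows, assuming an Oracle 10g-style configuration; the service name `stby` and the 240-minute delay are placeholders for illustration, not values from the thread:

```sql
-- On the primary: keep shipping redo to the standby immediately, but
-- tell the standby's managed recovery to wait 4 hours (DELAY is in
-- minutes) before applying it.  'stby' is a hypothetical Oracle Net
-- alias for the standby database.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DELAY=240'
  SCOPE=BOTH;
```

The delay protects against logical errors (a dropped table, a bad batch job) reaching the standby; in a real disaster the standby can ignore the delay and roll forward immediately with `ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY`.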

Agreed. I hope not too much harm was done.  

        The point is that I've never used this option: together with data protection one wants high availability, and a time lag contradicts that requirement. In numbers: if 15 minutes of downtime is allowed, then recovery must take at most 15 minutes.

No. I have never seen (which doesn't mean it isn't possible) recovery lasting as long as the timeframe spanned by the redo to be applied. In general, 15 minutes' worth of redo does not take 15 minutes to apply.

Yes, this is my experience too. However, if you have a few hours of lag, I am not sure how to calculate the upper bound of the redo apply time, which in my experience varies from a few minutes to 15-30 minutes (15 minutes is a coffee break, 30 is two coffee breaks: something a normal business can tolerate).

        I am not sure how to calculate the maximum lag allowed, as it depends on machine speed, redo size, and probably redo contents.

The maximum lag allowed should be business driven. How much time does the business allow themselves to discover a logical error? How much time do they allow you to do the same? The time it takes to apply the redo for such a timeframe can only be determined by testing. How much redo is generated at most during such a timeframe? How much time does it take to apply that amount of redo? That depends mainly on your CPU and storage capabilities. Frequently I see 8 hours' worth of redo being applied in a handful of minutes, and that is not a very idle system, BTW. Your Mileage May Vary. TEST!
Again, this is why many organisations tend to install two standbys, once the decision for installing a standby is made.
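One way to put numbers behind that "TEST!" advice is to watch the apply rate on the standby itself; the query below is a sketch assuming an Oracle 10g standby running managed recovery, and the figures in the comment are illustrative, not measurements from the thread:

```sql
-- On the standby, V$RECOVERY_PROGRESS exposes the observed apply rates
-- while managed recovery is running.  Dividing your peak redo backlog
-- by the slowest rate seen under load gives a rough upper bound on
-- catch-up time: e.g. a 4 GB backlog at an average apply rate of
-- 20 MB/s clears in roughly 3-4 minutes.
SELECT item, sofar, units
FROM   v$recovery_progress
WHERE  item IN ('Active Apply Rate', 'Average Apply Rate');
```

Measuring this during the busiest redo-generation window, rather than on a quiet Sunday, is what turns the estimate into something the business can actually sign off on.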

Two standbys are my choice too. After all, good sleep comes at a cost. Testing is not enough, at least not for me. I prefer proof.

If your business really cannot afford an outage of say, > 4-6 hours, 2 standbys are required IMHO.  

4-6 hours would be a disaster for the businesses I have worked for over the last 8 years.

And that 2 standby thing brings me back to the point the OP had in mind when he started with this thread. Storage replication versus Data Guard.  

Well, recently I had to think a lot about that in terms of HA (data protection was part of HA in my case too, meaning that the solution must preserve 100% of committed data after a disaster).

First of all: let's ask why go for a storage solution or DG at all? Why not go for triple-mirrored RAID 10? If a single disk fails, two more copies are left.
Why not go for a (non-parallel) cluster: if one box fails, the other is available with all its functionality: CPU, RAM, network cards, etc.

So far I have arrived at this:

Best regards,

Carel-Jan Engel

If you think education is expensive, try ignorance. (Derek Bok)


Received on Thu Sep 21 2006 - 05:27:18 CDT
