

RE: Some Dataguard is good, lots more must be better?

From: Carel-Jan Engel <>
Date: Tue, 19 Sep 2006 23:17:08 +0200
Message-Id: <>

On Tue, 2006-09-19 at 10:35 -0700, Kevin Closson wrote:

> >>>alternative DR solutions. For people that do care about near
> >>>continuous availability DG is really quite cheap - even if
> >>>you didn't buy EE and is (reasonably) easy to manage. It
> >>>helps that it is a somewhat old and reliable technology as well.
> ...the thread I started was about the practicality of using DG
> for a lot of databases.

Apologies, we ran off track.

> One of our accounts has over 80 databases that need DR and
> I must say that in my mind chewing on crushed glass would
> bring more pleasure than trying to deal with 80 primary/standby
> DG relationships... It just seems to me that at some number
> of databases, the only humanly possible way to get DR would
> be at the storage level... thoughts ?

I don't think so. You seem to be thinking in solutions; I prefer to think in requirements. What recovery requirements do you need to cover? If those include the ability to back out of an upgrade (see my first post in this thread), I think DG is very useful, and I doubt whether storage replication can cover that. Can you switch one database from the main DC to the DR DC, and have it replicating back from the DR DC to the main DC, while the rest keeps replicating from the main DC to the DR DC? How easy is that? Can the DBA initiate that him/herself, or do Storage Admins and/or System Admins need to get involved? I dislike dependencies like that. They complicate the environment, and on Miracle's website I saw a quote from a well known and much respected member of this list: 'Complexity is the enemy of availability'. That leads to another question: is Data Guard complex? I don't think so.

I have one customer with approx. 20 databases with DG. Setting up DG is a piece of cake, and is included in creating the database. It takes 10-15 minutes to fiddle with the init.ora's, and then one command line to fire the script that creates the standby controlfile and standby redo log files, copies the database (hot backup), starts the standby, adds the temp files, and puts the whole thing in managed recovery mode. Of course there is a delay of 4 hours implemented. It's not difficult at all (not implemented yet at this site) to have monitoring guard the whole redo apply and warn by pager/email when thresholds in recovery are crossed. I had this type of monitoring implemented at another site (approx. 8 databases with DG) that used Big Brother (since replaced by some home-grown monitoring tools).
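To give an idea of how little is involved, here is a minimal sketch of the kind of init.ora parameters and commands such a setup script drives. All names, paths and values are illustrative (9i/10g-era syntax assumed), not taken from any actual customer configuration:

```
# primary init.ora (illustrative values only)
log_archive_dest_2      = 'SERVICE=stby_db LGWR ASYNC DELAY=240'  # 4-hour apply delay
standby_file_management = AUTO

# standby init.ora
fal_server = prim_db
fal_client = stby_db
```

```sql
-- on the primary: create the standby controlfile for the copy
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';

-- on the standby, after restoring the hot backup: start managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

Everything else (the backup copy, adding temp files) is ordinary scripting around these few statements.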

No hard management effort at all, great DBA-controlled features, slick fall-back options because of DELAY, and 'unplugged' upgrades.
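The redo-apply monitoring mentioned above can be built on a couple of simple queries on the standby; a sketch (exact columns per your release, thresholds are for you to pick):

```sql
-- what is the managed recovery process doing, and at which log sequence?
SELECT process, status, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';

-- highest sequence applied so far; compare against the latest archived
-- sequence on the primary to estimate the apply gap
SELECT MAX(sequence#) AS last_applied
  FROM v$archived_log
 WHERE applied = 'YES';
```

A cron job comparing the gap against a threshold and firing a pager/email is all the "monitoring framework" this really needs.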

I have customers with pretty junior DBAs, or just above that level. They manage to run DG environments with my scripts, even with multiple standbys. Another nice one: how easy is it to have multiple DR sites with storage-based replication? Approx. 60-70% of my customers run a standby on the main site (synchronous) and a second standby at a remote site. An ISP runs their email systems in two datacenters: half of the country in one DC and the other half in the other DC. The standby servers in both DCs serve as local standbys as well as remote standbys for the other DC. With 4 servers they run a nice, scalable and robust setup.

But that is all 'proof' that it is possible. I have other customers that chose storage replication. They wanted just one solution for all replication issues. There have been outages there, lasting several hours, up to approx. 24. They didn't perform a DR failover, because it would have had such a vast amount of repercussions. It was an 'everything or nothing' type of DR. After failing over, it would take days if not weeks to re-establish the primary site. Just moving the most important applications to the DR site wouldn't work. There you are: millions spent on a DR site, but when disaster strikes it is effectively unusable.

The customer with the 20 DG databases performs switchovers whenever they like. They feel comfortable with it, because they can practice, and practice on a small scale. They can upgrade one storage cabinet and move the old one to the DR site, step by step, without major downtime, and without storage vendor/type/serial number/firmware release lock-in. Replicating RAID 1+0 to RAID 5, whatever combination of tastes you like, it's all possible.
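For those who haven't done one: a graceful switchover is just a handful of statements (9i/10g physical standby syntax; database names and the surrounding listener/service work omitted):

```sql
-- check readiness first, on the primary
SELECT switchover_status FROM v$database;   -- expect TO STANDBY

-- on the primary: convert it to a standby
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;

-- restart the old primary mounted as a standby, then on the old standby:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;
```

Because both roles survive, you can switch back just as easily, which is exactly what makes small-scale practice runs cheap.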

Then another customer: they're on the eve of moving > 100 databases from legacy HW to new big iron, going from > 40 servers to 6 (Oracle alone). All the database moves are done with DG. It allows the DBAs to do all the preparation work during office hours. Only the cutovers of business-critical applications might need nightly or weekend attention. If a cutover fails, just re-enable the 'old' database.

But, again, I do not want to start a war about solutions, and the last person I'd want to get into a shoot-out with is Kevin. Probably it isn't even about manageability. It's about business requirements, and how to fulfill them.
Talking about replication solutions is like talking about backups. I don't like to talk about backups. I prefer to talk about recovery. Once that is settled, I can decide what backup I need for the recovery.

Another EUR 0.02

Best regards,

Carel-Jan Engel

If you think education is expensive, try ignorance. (Derek Bok)

Received on Tue Sep 19 2006 - 16:17:08 CDT
