
Re: Data Mirroring on two data centers -- How to use ASM ?

From: Tanel Põder <>
Date: Sun, 21 May 2006 10:45:10 +0800
Message-ID: <0a7701c67c80$95925090$6401a8c0@porgand>


Yeah, I picked your email just because it was the last one in the thread. I definitely agree with you that the need for a solution should be derived from proper requirements, not vice versa.

Btw, one more point when comparing redolog-level replication with storage replication: since the storage layer normally knows nothing about Oracle's block formats, it has no choice but to replicate any corruption to the remote site as well. Oracle's redo apply mechanism would detect most such issues...


  Well, Tanel,

  I perfectly understand that you read Data Guard between the lines in my answer ;-). However, it wasn't there. I stayed away from mentioning Data Guard deliberately, because that is just another solution. At this stage, when the requirements are not clear, it's too early to talk solutions.

  Having said that, I totally agree with your point. I would add that storage-based replication of the whole database also suffers from archiving of the redo: those block changes have to be replicated too. The same happens to controlfile updates. It all gets worse when you have multiple controlfiles and multiple online redo log group members.

  Apart from saving bandwidth, Data Guard allows you to maintain a delay in applying archives. This feature helps to survive hardware failures AND 'human' failures, like accidentally dropping/truncating a table, 'new application features' and so on. Several CT sites have survived serious human errors thanks to this feature. SAN replication would have replicated the disaster immediately, leaving the DBA with no other option than a time-consuming TSPITR.
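  As a rough sketch of what that apply delay looks like in practice (the destination number, service name 'stby' and the 30-minute value are just examples, not anything from the original thread):

```sql
-- On the primary: ship redo to the standby, but tell managed
-- recovery to wait 30 minutes before applying each archivelog.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby DELAY=30'   -- DELAY is specified in minutes
  SCOPE=BOTH;

-- On the standby: start managed recovery WITHOUT real-time apply,
-- because real-time apply ignores the DELAY attribute.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```

  The delay gives you a window in which a dropped or truncated table has not yet reached the standby, so you can stop apply, open the standby, and recover the data.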

  For SAN-based archive replication I'd suggest setting ARCHIVE_LAG_TARGET=n. This forces a log switch every n seconds, even when the current log file isn't full yet.
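  A minimal sketch (the 900-second value is only an example; choose n to match the amount of transaction loss you can tolerate):

```sql
-- Force a log switch at least every 15 minutes, so the archivelogs
-- sitting on the replicated volume are never older than that.
ALTER SYSTEM SET archive_lag_target = 900 SCOPE=BOTH;
```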

        Best regards,

        Carel-Jan Engel

        If you think education is expensive, try ignorance. (Derek Bok)

  On Sat, 2006-05-20 at 22:03 +0800, Tanel Põder wrote:

    One more reason to use Data Guard instead of storage/LVM-level replication in high-activity OLTP environments is that redolog-entry-based shipping is much more fine-grained than storage-block-level replication.

    I once asked an EMC admin, who told me that the minimum replication unit for SRDF is 32k. So if you update one row and commit, with redo shipping you'd only need to send a few hundred bytes to the standby, while with SRDF you'd transfer 32 kilobytes over the fibre when the block is written to disk, plus you'd continuously transfer the redolog writes before the datafile blocks are sent to the remote site.

    If your management definitely requires SAN-based replication, then you could just keep your archivelogs on a replicated volume and do frequent log switches to keep the lag small in case of a primary failure.
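    One way to script those frequent switches (a sketch; run it from whatever scheduler you use — ARCHIVE LOG CURRENT, unlike SWITCH LOGFILE, waits until archiving completes before returning):

```sql
-- Forces a log switch and waits until the resulting archivelog
-- is fully written out, i.e. safely on the replicated volume.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```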


      Hi Madhu,

      I'm wondering about your primary 'requirement' of mirroring data across TWO data centers.

      IMHO, mirroring between data centers is a solution or, if you like, a tool. Either way, it isn't a requirement.

      Requirements could be something like:

- After a server failure, the database should be available again within 30 minutes
- After a server failure, no more than 5 minutes worth of transactions may be lost
- After a database corruption, the database should be available again within 6 hours
- After a database corruption, no more than 30 minutes of transactions may be lost
- Restoring the database to any point in time between now and 6 days ago must be possible
Received on Sat May 20 2006 - 21:45:10 CDT
