RE: Best practices using Dell Equalogic San Block-based Replication

From: Goulet, Richard <>
Date: Fri, 4 Jan 2013 20:08:52 +0000
Message-ID: <>

                All true, but when the infrastructure team makes a decision on how often they're going to replicate the files, especially "across the pond", you're kinda stuck with it, as we are.

Richard Goulet
Senior Oracle DBA/NA TEAM Lead

This communication, including any attachments, is intended only for the person or entity to which it is addressed and may contain confidential material. Any review, retransmission, distribution or other use of this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please destroy any copies, contact the sender and delete the material from any computer. Thank you.

From: Matthew Zito []
Sent: Friday, January 04, 2013 3:01 PM
To: Goulet, Richard
Subject: Re: Best practices using Dell Equalogic San Block-based Replication

The every 24 hour method works, but there are some craftier tricks for how you can leverage a NetApp to streamline this a little bit. Since the replication only transfers the deltas, sometimes it actually works better to kick off the replication every hour, so there isn't a huge queue of pending changes.
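To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch. The transfer times are made-up assumptions for illustration, not measurements from any NetApp setup: with a periodic snapshot schedule, a change made just after one transfer starts isn't on the DR side until the next transfer completes, so worst-case staleness is roughly one interval plus the transfer time.

```python
# Rough worst-case staleness (RPO) for a periodic snapshot-replication
# schedule. Purely illustrative arithmetic; transfer times are
# hypothetical assumptions, not measurements.

def worst_case_staleness(interval_hours, transfer_hours=0.0):
    # A change landing just after a cycle begins must wait a full
    # interval for the next cycle, plus the time the transfer takes.
    return interval_hours + transfer_hours

# Daily replication: DR data can be more than a day old.
print(worst_case_staleness(24, transfer_hours=2.0))   # 26.0
# Hourly replication of the same deltas keeps the pending queue small.
print(worst_case_staleness(1, transfer_hours=0.1))    # 1.1
```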

Another mechanism that works a little better is to create a separate FlexVol for your archive logs and put those on a 15-minute (or more) replication schedule. That way, even if your replicated database is six hours behind, the archive logs aren't more than 15 minutes behind.
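On the 7-mode filers that were common at the time, that kind of split schedule was typically expressed in /etc/snapmirror.conf on the destination, with cron-style minute/hour/day fields after the throttle column. This is a sketch from memory of the 7-mode format; the filer and volume names are made up, so check the NetApp/Dell docs before relying on it:

```
# /etc/snapmirror.conf on the destination filer (Data ONTAP 7-mode syntax).
# Fields: source  destination  arguments  minute  hour  day-of-month  day-of-week
# Hostnames and volume names below are hypothetical.
filerA:oradata  filerB:oradata  -  0           0,6,12,18  *  *   # data files every 6 hours
filerA:arch     filerB:arch    -  0,15,30,45  *          *  *   # archive logs every 15 minutes
```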

This will all work a little differently for EqualLogic because it's gonna be a block-based solution instead of a file-based one - and you have synchronous, semi-synchronous, and asynchronous replication options. Dell should have white papers on best practices with Oracle.

The big benefits of doing it at the storage array level are reduced CPU utilization on the database, the ability to replicate non-database content (scripts, logs, images, app code, whatever), and database portability (i.e., whether you're using MySQL, Oracle, or SQL Server, you can use the same replication technique). There's also a perception (fairly or not) among technology folks that storage replication from one of the major storage vendors is more reliable than in-database technologies.


On Fri, Jan 4, 2013 at 2:44 PM, Goulet, Richard <<>> wrote:

April,

        We use a similar technology, but from NetApp instead. The fun is how often the replication fires. Ours does so every 24 hours, so any database in the DR region can be as much as 24+ hours behind its primary. When you start the db it will go through crash recovery, because that's what the db thinks happened.

        The FUN is how the software replicates your data. NetApp does it by taking a snapshot of the entire db at a specific moment in time, so the control files and redo logs line up as if the db had just crashed. Other packages do similar things with mirrors (and you thought smoke and mirrors was only for Hollywood!), which again gives you a consistent picture of the db. I have heard of a couple of products that are not that smart, and you can then have issues. Don't know if your product is one of them.

        A lot of the success or failure of doing this comes down to how much stale data and downtime the business can tolerate. While this can be a cheaper solution, your data at the remote site is not going to be available until you mount the file systems and start up the databases. Also, don't forget you can't have an HP-UX server over here and a Linux one over there. Doesn't work, BTDT, because some management types believe they know better. Thank you.

Received on Fri Jan 04 2013 - 21:08:52 CET
