RE: Bandwidth management for DC migration

From: CRISLER, JON A <JC1706_at_att.com>
Date: Fri, 3 Feb 2012 23:54:51 +0000
Message-ID: <9F15274DDC89C24387BE933E68BE3FD31A6532_at_MISOUT7MSGUSR9D.ITServices.sbc.com>



If you are on 11gR2, there is a new feature, sometimes considered undocumented, that allows for dynamic compression of redo as it is shipped. This will save some bandwidth for Data Guard configurations, but I don't think it helps with the standby redo logs and real-time apply. It can certainly save some bandwidth, though; a sketch of enabling it is below.
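A minimal sketch of how that is usually switched on (assuming an 11gR2 primary with the Advanced Compression option licensed; the service and DB_UNIQUE_NAME stby_dc2 are just placeholders for your standby):

   -- Enable compression of redo shipped to the standby destination.
   ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
     'SERVICE=stby_dc2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
      DB_UNIQUE_NAME=stby_dc2 COMPRESSION=ENABLE'
     SCOPE=BOTH;

   -- Check that the attribute took effect.
   SELECT dest_id, compression FROM v$archive_dest WHERE dest_id = 2;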

If you run a local RMAN backup using compressed backupsets, the resulting backup will give you an idea of what size has to be moved to the new data center. Just make sure your backup clone uses compressed backupsets. Based on past experience I would guess the resulting RMAN backup to be in the 200-300 GB range, but this is entirely data dependent: some data compresses better than others. I once had a db that was 95% MPEG and JPEG, and the resulting compressed RMAN backup was more or less the size of the raw datafiles, as MPEG and JPEG are already compressed and RMAN could not compress them any further. Assuming that you are using real-time apply, which (I am guessing) doubles the amount of data transferred, you are now looking at 60 GB over 4 hours, or 15 GB per hour, or 4.16 MB per second (15,000 MB / 3,600 s).
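If it helps, a rough sketch of the kind of sizing backup I mean (the format path and tag are just placeholders):

   RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
   RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE
           FORMAT '/backup/stage/%U' TAG 'DC_MOVE_SIZING';
   RMAN> LIST BACKUP TAG 'DC_MOVE_SIZING';

The sizes reported there (or simply the backup piece files on disk) are a decent proxy for the volume you will have to push over the wire for the initial clone.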

Based on 4.16 MB per sec you are looking for at least an OC-1 sized link (which is roughly 5 MBps), and at that you will be running at about 100% capacity at peak load. OC-3 is going to give you around 15 MBps, so that might be overkill. You might have other offerings available, such as MPLS or older DS-3 circuits. There are also latency issues to consider - this is definitely not a case where you want to run Data Guard in Max Protection mode :) You don't say where in Asia, but I would not be surprised to see 100 ms+ latency. Also, keep in mind that there are long lead times to install this sort of bandwidth, unless you already have a data center where you are sharing a high-capacity line like OC-12, OC-48, etc. And the monthly cost for this sort of link - wow, it will be expensive.
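Before you commit to a circuit, it is worth double-checking the peak redo rate from the archived log history on the primary; something along these lines should get you close (a sketch - adjust the window and dest_id to your setup):

   -- Redo generated per hour over the last week, in MB.
   SELECT TRUNC(completion_time, 'HH24') AS hour,
          ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
   FROM   v$archived_log
   WHERE  completion_time > SYSDATE - 7
   AND    dest_id = 1
   GROUP  BY TRUNC(completion_time, 'HH24')
   ORDER  BY 1;

Divide the worst hour by 3,600 to get the sustained MB per second you need, then size the link with some headroom on top of that.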

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Hemant K Chitale
Sent: Friday, February 03, 2012 6:16 AM
To: ORACLE-L
Subject: Bandwidth management for DC migration

I am in a Data Centre Migration project where we are moving ebiz data and apps from a US data centre to Asia. We have DataGuard configured but the bandwidth is a constraint when handling the daily spike in redo volumes. What options (software / hardware / business / users / concurrent requests) do you explore? Typically what bandwidth is used to clone a 600GB database and handle daily volumes of 65GB of redo (spike of 30GB in a 4-hour peak window)?
Hemant K Chitale

sent from my smartphone

--

http://www.freelists.org/webpage/oracle-l
