RE: Moving DR site from 30miles to 1600miles

From: Tanel Poder <>
Date: Fri, 11 Apr 2008 00:47:21 +0800
Message-id: <013801c89b2a$8c0c7ee0$3201a8c0@windows01>

You might need a better SSH client if your current one is OpenSSH-based; OpenSSH apparently limits its internal send buffer sizes even when your system defaults / maximums are tuned higher to match WAN latencies.

Check this link if you want a high-throughput version of SSH:  

64 kB of send buffer for a 1600-mile "wide" WAN is too low if you want to achieve decent throughput. TCP, as a reliable transport protocol, needs retransmit capability, so it must keep every packet in the send buffer until it is acknowledged by the other side; a small buffer will therefore start throttling your throughput when the network round-trip time is long.
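The throughput cap from a small send buffer can be sketched with a quick calculation (the 50 ms round-trip time here is an assumption for illustration; measure your own with ping):

```python
# Max TCP throughput is bounded by window size / round-trip time:
# the sender can have at most one window's worth of unacknowledged
# data in flight per RTT, regardless of the link's raw bandwidth.

def window_limited_throughput(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput in bytes per second."""
    return window_bytes / rtt_seconds

window = 64 * 1024   # the 64 kB send buffer mentioned above
rtt = 0.050          # assumed 50 ms RTT for a ~1600-mile path

mbits = window_limited_throughput(window, rtt) * 8 / 1e6
print(f"Cap: {mbits:.1f} Mbit/s")   # ~10.5 Mbit/s, far below an OC-3's 150 Mbit/s
```

So even a perfect, loss-free connection cannot push more than about 10 Mbit/s through a 64 kB window at that distance.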

You can actually calculate how large the TCP buffers need to be if you want to fill X Mbps of your OC-3 link. Google for "bandwidth delay product"; the formula is very simple (only three variables: link bandwidth, TCP buffer size, and round-trip time, which you can roughly measure with ping or tnsping).
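As a sketch of that bandwidth-delay product calculation (the 150 Mbit/s figure comes from the thread below; the 50 ms RTT is an assumption):

```python
# Bandwidth-delay product: the TCP buffer size needed to keep a link
# full is the amount of data "in flight" during one round trip:
#   buffer_bytes = bandwidth (bytes/s) * round-trip time (s)

def bdp_bytes(bandwidth_bits_per_s: float, rtt_seconds: float) -> float:
    """TCP buffer (bytes) needed to saturate the link."""
    return bandwidth_bits_per_s / 8 * rtt_seconds

# OC-3 at 150 Mbit/s, assumed 50 ms RTT (measure with ping or tnsping)
buf = bdp_bytes(150e6, 0.050)
print(f"Required buffer: {buf / 1024:.0f} KB")   # ~916 KB, vs the 64 KB in use
```

That is roughly 14x the 64 kB window described below, which matches the throughput shortfall being reported.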


Tanel Poder <>  

From: [] On Behalf Of Ravi Gaur
Sent: Wednesday, April 09, 2008 23:39
Subject: Moving DR site from 30miles to 1600miles

Hello all,

We are planning to move our DR site, which is currently about 30 miles from the production site, to ~1600 miles away. We currently have a 4-node RAC setup on our production site that houses 3 production instances (all on Solaris 10). The SAN is StorageTek and we use ASM for volume management. In our testing we are hitting issues with network transfer rates to the 1600-mile site -- a simple "scp" of a 1 GB file takes about 21 minutes. We generate archives at a rate of approximately 1 GB per 8 minutes. The network folks tell me that the TCP setting is a constraint here (currently set to a 64k window size, which the sysadmins here say is the max setting). We have an OC-3 link that can transfer at 150 Mbps (that is what the networking team tells me).
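For reference, the rates quoted above work out roughly as follows (a back-of-envelope sketch):

```python
# Back-of-envelope comparison of the transfer rates quoted above.

GB = 1024**3

scp_rate = GB / (21 * 60)       # observed scp throughput, bytes/s
archive_rate = GB / (8 * 60)    # archive generation rate, bytes/s
link_rate = 150e6 / 8           # OC-3 capacity, bytes/s

print(f"scp:      {scp_rate * 8 / 1e6:6.1f} Mbit/s")      # ~6.8 Mbit/s
print(f"archives: {archive_rate * 8 / 1e6:6.1f} Mbit/s")  # ~17.9 Mbit/s
print(f"link:     {link_rate * 8 / 1e6:6.1f} Mbit/s")     # 150.0 Mbit/s

# scp falls well short of both the archive generation rate and the
# link capacity -- consistent with a small TCP window over a
# high-round-trip-time path.
```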

I have an SR open with Oracle and have also gone through a few Metalink notes that talk about optimizing the network from a Data Guard perspective. One of the notes I came across also describes a cascaded standby Data Guard setup (a local standby pushes logs to the remote site).

I'm trying to collect ideas on how others are handling similar scenarios and whether there is something we can do to utilize the full network bandwidth available to us.


- Ravi

-- Received on Thu Apr 10 2008 - 11:47:21 CDT
