Re: speed of light
Date: Fri, 06 Jun 2008 13:51:09 -0700
joel garry wrote:
> On Jun 6, 3:40 am, kerravon <kerra..._at_w3.to> wrote:
>> We have an Oracle 9 (soon 10) on Solaris 8 system located in Australia
>> and a backup system located in the US. Due to the speed of light, there
>> is a 400 ms round trip.
>> Ideally we would like to use DA (line by line) replication from
>> Australia to the US, but for some reason that is being affected by
>> the round trip time.
>> My guess is that DA is designed to send one bit of data at a time, and
>> thus waits for a response before sending the next bit of data.
>> Is there any option to get DA to do one of:
>> 1. While waiting for acknowledgement from remote, queue data and
>> then send all the queued data in one hit.
>> 2. Have multiple threads of execution, sending data off while waiting
>> for a response.
>> 3. Have the remote database as an NFS mount so that Oracle thinks
>> it is writing locally and passes the data to Unix. Unix immediately
>> acknowledges the write request and then sends off the multiple writes
>> to the remote.
>> Currently we are using Oracle Dataguard to cause the data to be
>> sent to the remote in batches. That does work, ie it can keep up
>> with the transaction flow, but unfortunately means that the remote
>> database lags 10-20 minutes behind the master. I don't understand
>> why this should be the case. Would a 400 msec round trip explain
>> that? Or is this a "feature" of Dataguard?
> The "feature" is that DG sends archived logs. You can send the logs
> more often by switching redo more often. Also, in 10g there are
> additional options which may help, depending on your exact
> requirements. The network transfer speed (including the settings Dan
> mentioned) can make a difference, depending. I don't think the round
> trip time should make a difference, but perhaps you can set sqlnet
> tracing on and see.
>
> For my purposes in 9, I've carried over the non-DG old scripted way of
> doing things - compress the logs, send them over the slow unreliable
> network, then uncompress and apply and scream if it doesn't work.
> YMMV.
>> We are about to write an application to do the replication ourselves,
>> which will read multiple rows from the appropriate application tables,
>> compress the data, write to a table once/minute with the batched
>> data, let Oracle DA replicate that one table, 400 msec response is
>> irrelevant, then have a daemon to decompress the data the other
>> end and populate the relevant application tables.
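The batch-and-compress scheme described above can be sketched roughly as follows. This is a minimal illustration, not the actual application: the table names, the once-a-minute scheduler, and the receiving daemon are omitted, and it assumes the rows are JSON-serializable. The point is that one compressed blob per minute pays the 400 ms round trip once per batch instead of once per row.

```python
import json
import zlib

def pack_batch(rows):
    """Compress a batch of rows into a single blob for the transfer table."""
    payload = json.dumps(rows).encode("utf-8")
    return zlib.compress(payload)

def unpack_batch(blob):
    """Decompress a blob back into rows on the receiving side."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))

# One minute's worth of changes travels as one row, so the round trip
# latency is paid per batch, not per row.
batch = [{"id": i, "val": "x" * 100} for i in range(1000)]
blob = pack_batch(batch)
assert unpack_batch(blob) == batch
assert len(blob) < len(json.dumps(batch).encode("utf-8"))
```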
> You might also check into materialized views, and there are other
> solutions where you don't have to reinvent the wheel.
>> To my mind, this seems the wrong solution to the problem, and the
>> utilities should be able to cope with the speed of light limits.
>> But I'm not the DBA so can't advise on whether any 3rd party
>> utilities etc. would do the job.
> There's the Quest solution, but I wouldn't know about that.
>
> jg
> --
> @home.com is bogus.
> "The manuals start with some errors built in, and then get out of
> date" - Jonathan Lewis
Settings like SDU are not yet able to change the speed of light, but they do significantly affect transfer speed by optimizing the way Oracle uses the network.
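For what it's worth, SDU is negotiated per connect descriptor; an illustrative tnsnames.ora entry (host, port, and service names here are placeholders, not from the original post) might look like:

```
STANDBY_DG =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dr-host.example.com)(PORT = 1526))
    (CONNECT_DATA = (SERVICE_NAME = standby))
  )
```

Raising SDU from the old 2 KB default means fewer Oracle Net packets per redo write over a long-haul link.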
Nor is the speed of light a significant player. We are generally limited, according to the literature, to about 1 ms per 50 km (roughly 31 mi for Americans).
When we teach Data Guard classes here, we always create a non-public network for log file shipping, with its own port and its own listener. Why compete with public traffic if you can avoid doing so?
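A minimal listener.ora entry for such a dedicated shipping network could look like the following (the interface address and port are made-up placeholders; the idea is simply that the listener binds only to the private interface):

```
# listener.ora on the standby: listen only on the private replication interface
LISTENER_DG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.2)(PORT = 1526))
  )
```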
--
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan_at_x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org

Received on Fri Jun 06 2008 - 15:51:09 CDT