Re: Using dblinks over distance

From: Andy Sayer <andysayer_at_gmail.com>
Date: Mon, 13 Jan 2020 08:20:00 +0000
Message-ID: <CACj1VR7fhH3UNcuJ89Oqx+JQDySQFzkbiVK0HqVfMMbZ8XGJvQ_at_mail.gmail.com>



The photon's journey from the Sun’s core to the Earth is more about gravity. The speed of electrons in a wire is a different matter; you have a few things going on: the individual speed of one electron (about 0.01c), the drift velocity (super slow, nanometres per second), and the signal velocity (to do with one end of the wire being affected by current at the other), which is more like the speed of light in a vacuum. This site (found via Google) has a good explanation: https://wtamu.edu/~cbaird/sq/mobile/2014/02/19/what-is-the-speed-of-electricity/

Now that I’ve got my money’s worth from my Masters degree in Physics...

You have a few options:
- Make the distance between DBs smaller (migrate the non-cloud DB to the same data centre, or migrate the cloud DB back to your premises).
- Increase the signal velocity (better cables, fibre optics...).
- Increase the available bandwidth (more routes, better fibre optics...).
- Utilise more of the bandwidth you already have (change your code so it is less chatty across the link - see where most of the chat is coming from and start there; favour anything but nested loop joins from one DB to the other).

All these options have money costs and effort costs.
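To put rough numbers on the "less chatty" option: a latency-only back-of-the-envelope model (all figures below are illustrative assumptions, not measurements) shows why roundtrips, not bandwidth, dominate row-at-a-time code:

```python
# Latency-only cost model for dblink chattiness: every network roundtrip
# costs one RTT, so row-at-a-time code pays roughly rows * RTT in total.
# All numbers here are illustrative assumptions, not measurements.

def transfer_seconds(rows: int, rtt_s: float, rows_per_roundtrip: int) -> float:
    """Seconds spent purely on roundtrip latency, ignoring bandwidth."""
    roundtrips = -(-rows // rows_per_roundtrip)  # ceiling division
    return roundtrips * rtt_s

rows = 1_000_000
rtt = 0.060  # assumed 60 ms cross-country round trip

print(transfer_seconds(rows, rtt, 1))      # one row per roundtrip: ~60,000 s (~16.7 hours)
print(transfer_seconds(rows, rtt, 32767))  # arrayed: 31 roundtrips, under 2 s of latency
```

This is also why a nested loop join across the link hurts so much: it can turn every outer row into its own roundtrip.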

If you do decide to migrate your other DB to the same cloud, then remember to think about all the chatting it does at the moment - are you going to make something else worse?

Hope this helps,
Andrew

On Mon, 13 Jan 2020 at 04:52, Tanel Poder <tanel_at_tanelpoder.com> wrote:

> Just realized you mentioned that the updates seem to be the problem. The
> first question is: Are these updates some (PL/SQL) loops that update
> records one at a time? Or rather (big) array updates? If it's just an
> update single record at a time application loop, then you're bound by the
> network RTT latency and no network throughput tuning will help you.
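>
> For a sense of scale (illustrative numbers): 100,000 single-row updates over
> a 60 ms RTT link spend 100,000 x 0.06 s = 6,000 s, i.e. 100 minutes, just
> waiting on the network, no matter how much bandwidth you add.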
>
> Btw, I think that the larger arraysize helping with *selects* probably
> helped by speeding up the data retrieval from the database to the
> application, not between databases via database link (as I mentioned above,
> the arraysize for dblink communication between two database processes
> should always be set to the max value...)
>
> Tanel
>
>
> On Sun, Jan 12, 2020 at 9:26 PM Tanel Poder <tanel_at_tanelpoder.com> wrote:
>
>> Run Snapper on one of the sessions to see what kind of throughput you are
>> getting and how many SQL*Net roundtrips per second you are doing.
>>
>> Snapper reports metrics like these:
>>
>> - bytes sent via SQL*Net to dblink
>> - bytes received via SQL*Net from dblink
>> - SQL*Net roundtrips to/from dblink
>>
>> Arraysize only affects Oracle server <-> Client Application traffic, the
>> database link arraysize is always set to max (32767 I think). So, setting
>> your client fetch arraysize doesn't affect DBlinks. Assuming that you're
>> trying to move large amounts of rows at a time (and not a PL/SQL loop
>> querying one row per roundtrip), you'll need to tune your network/TCP stack
>> for throughput:
>>
>> 1) Make sure that your app is moving large amounts of rows at a time
>> (large INSERT-SELECTs or SELECTs moving millions of rows at a time)
>> 2) Make sure that your network link/connection between the endpoints is
>> actually capable of producing the throughput you need (use iperf3 app for
>> testing)
>> 3) Enable TCP send/receive buffer sizes to grow to the required size
>> given the ACK latency and throughput requirement
>> - Search for: "Bandwidth Delay Product"
>> - Essentially: TCP send buffer = Desired Bandwidth * ACK Latency
>> 4) "Tuning" Oracle SDUs & TDUs is probably not needed in modern Oracle
>> versions and won't help at all if 1-3 haven't been solved first
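>>
>> To put a number on 3): at a 1 Gbit/s throughput target with 70 ms ACK
>> latency, the send buffer must be able to hold roughly (1e9 / 8) x 0.07 =
>> 8.75 MB in flight (illustrative figures, not a recommendation).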
>>
>> There's usually little that can be done to reduce ACK latency because of
>> network topology and physics
>>
>> --
>> Tanel Poder
>> https://blog.tanelpoder.com/seminar
>>
>>
>> On Thu, Jan 9, 2020 at 11:29 AM Cohen, Andrew M. <Andrew.Cohen_at_tufts.edu>
>> wrote:
>>
>>> As we move some of our database VMs to the cloud we have come across the
>>> following situation where the database links between databases (which used
>>> to be all on prem) are now being attempted over thousands of miles. This
>>> has caused extremely poor performance. We have found that an arraysize
>>> session parameter assists with some of the select statements, but updates
>>> which use links to get data don’t seem to perform any better with this
>>> arraysize parameter. To make matters more complicated, we are making these
>>> calls in PeopleSoft applications.
>>>
>>>
>>>
>>> We’re using Oracle 18 and 19 databases, mostly on RHEL 7.
>>>
>>>
>>>
>>> Does anyone have any suggestions as to how to improve performance using
>>> database links over distance?
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Andy
>>>
>>

--
http://www.freelists.org/webpage/oracle-l