Re: Network Issue
Date: Sun, 20 Jul 2008 23:47:48 +0200
On Sun, 20 Jul 2008 11:38:32 -0700 (PDT), raghu.vnin_at_gmail.com wrote:
>We are replatforming our datawarehouse from Oracle 9i in Sun Solaris
>to 10g in IBM AIX 5.3. There is a direct connect network link
>(Gigabit) setup between the two servers. The plan is to transfer the
>data over a dblink.
>When we tested the network link, we got 50G per hour using binary mode
>whereas ascii mode gave us 3G per hour. There is no other traffic
>through this direct connect link. Data transferred over DBLink also
>gave the same results as the ascii test.
Can I assume you used ftp for this test? Ftp does no special buffering,
and its packets won't be bigger than the MTU of the network card.
Speaking of which, assuming we are discussing 1 Gb Ethernet, did you set
the jumbo frames property to true on the IBM side? (I'm not aware of the
corresponding Solaris setting, but it must be there.) This will
increase the MTU from 1500 to 9000 bytes, so you will have less latency.
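As a rough sketch of what that looks like on each side (the adapter names ent0 and e1000g0 are placeholders for your actual interfaces, and the adapter must be detached before chdev will accept the change):

```shell
# AIX: enable jumbo frames on the adapter
chdev -l ent0 -a jumbo_frames=yes

# Solaris: raise the MTU on the interface (the driver must support it)
ifconfig e1000g0 mtu 9000

# Verify the effective MTU on both sides
netstat -i
```

Both ends of the direct link (and any switch in between, if there is one) must agree on the larger MTU, or frames will be dropped or fragmented.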
>1. Is this difference between speeds because the different servers
>(sun and ibm)? There must be some kind of data translation between the
There is not.
>2. What can be done at the Oracle level to overcome this?
You can set SDU in listener.ora (server side) and in tnsnames.ora (client side) to a multiple of the MTU of the card, with a maximum of 32k.
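For illustration, a sketch of where SDU goes in both files (service name, host, SID, and ORACLE_HOME are made-up examples, not your values):

```
# tnsnames.ora (client side)
DWH10G =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = aix-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dwh10g))
  )

# listener.ora (server side) -- SDU goes inside the SID_DESC
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32767)
      (SID_NAME = dwh10g)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0)
    )
  )
```

SDU is negotiated at connect time, so it must be raised on both ends to take effect.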
You could also consider a different strategy. I found that nfs is
way faster than scp, provided you set up the nfs mount point with 32k
read and write buffers (rsize/wsize).
In that case you would use good old export/import to overcome your MTU problems.
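The export/import route might look roughly like this (hostnames, paths, schema name, and credentials are all placeholders):

```shell
# On AIX: mount the Solaris export with 32k read/write buffers
mount -o rsize=32768,wsize=32768 sunhost:/export/dumps /mnt/dumps

# On the 9i source: export the schema into the shared directory
exp userid=system/<password> owner=DWH file=/export/dumps/dwh.dmp log=exp_dwh.log

# On the 10g target: import from the NFS mount
imp userid=system/<password> fromuser=DWH touser=DWH file=/mnt/dumps/dwh.dmp log=imp_dwh.log
```

This sidesteps SQL*Net entirely: the bulk transfer happens as large NFS reads, which is the binary-mode behaviour you already measured at 50G per hour.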
--
Sybrand Bakker
Senior Oracle DBA

Received on Sun Jul 20 2008 - 16:47:48 CDT