
Re: tuning NFS for large backups

From: Michael Heiming <michael+USENET_at_www.heiming.de>
Date: Wed, 29 Nov 2006 13:20:24 +0100
Message-ID: <9ha044-t26.ln1@news.heiming.de>


In alt.os.linux J.O. Aho <user_at_example.net>:
> NetComrade wrote:

> Hello,

> First of all, I'm no expert on these things, but I have a little bit of
> experience which may give you some help until the real pros make their posts.
> I have rearranged your original post a little bit.

> > Unfortunately the disk array in this particular
> > case seems to 'suck' badly (EMC Dell AX100), as it barely does over
> > 20 MB per second

> Not sure if your array is IDE or SCSI based, but it seems you need to tune the
> system a bit here to gain speed (even my old slow Sparc Ultra 10 almost beats
> your array in speed). For IDE you use hdparm and for SCSI you use sdparm.

It seems to be a NAS or SAN appliance, where you won't have much luck using 'hdparm'.
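
For reference, on a directly attached IDE disk the kind of check and tuning meant above would look roughly like this (a sketch only; the device name is just an example, and none of this applies to a SAN/NAS LUN):

# quick read-timing test on a local IDE/SATA disk
hdparm -tT /dev/hda
# enable DMA and 32-bit I/O support if they are off
hdparm -d1 -c1 /dev/hda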

>> We do Oracle backups to NFS (they go subsequently to tape), and the
>> load on both the db side and the NFS side spikes significantly. The load spikes
>> on the DB are not _that_ large; however, the load spikes on NFS are very large
>> (run queue in 20-50), especially during 'level 0' backups, which are
>> .5TB to 1TB in size.

> Using larger buffers does make things better, and if you are using a gigabit LAN,
> then increasing the MTU to 9000 can be quite a good thing to do.

I don't know if this is NAS or SAN at all, though jumbo frames can help if the rest of the network equipment supports them.
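
As a rough sketch (assuming eth0 is the gigabit interface; the switch and the NFS server must support jumbo frames as well, or connectivity will break), enabling them on Linux looks something like:

ifconfig eth0 mtu 9000
# or persistently on Red Hat style systems: add MTU=9000 to
# /etc/sysconfig/network-scripts/ifcfg-eth0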

> See that this is done before you start the nfs export on the nfs-server; you may
> need to adjust the buffer sizes so they are more optimal for your system.

> echo "1048560" > /proc/sys/net/core/rmem_default
> echo "1048560" > /proc/sys/net/core/wmem_default
> echo "2097136" > /proc/sys/net/core/rmem_max
> echo "2097136" > /proc/sys/net/core/wmem_max
> echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
> echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

> Another thing to think about on the nfs-client side is making sure you mount NFS3
> (in case you aren't already using NFS4): add the nfsvers=3 option to your
> /etc/fstab as an option for the nfs mount. I have noticed that at least on my
> system there have been mounts as nfs2 when the option hasn't been supplied.
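
A mount entry along those lines in /etc/fstab might look roughly like this (server name, export path, mount point and the rsize/wsize values are only placeholders to experiment with):

nfsserver:/export/backup  /backup  nfs  nfsvers=3,tcp,hard,intr,rsize=32768,wsize=32768  0 0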

The first question is whether the storage box can deliver more speed in the current setup at all. The problem might already be there, in which case no amount of Linux kernel tuning will help.

There are multiple possibilities that can lead to the mentioned performance problems. It seems the OP should really get someone on site who knows what he is doing, to debug the problems and come up with a solution to improve performance.

This seems to me a bit much to ask of some newsgroup, especially if you don't really know what you are doing.

-- 
Michael Heiming (X-PGP-Sig > GPG-Key ID: EDD27B94)
mail: echo zvpunry_at_urvzvat.qr | perl -pe 'y/a-z/n-za-m/'
#bofh excuse 24: network packets travelling uphill (use a
carrier pigeon)
Received on Wed Nov 29 2006 - 06:20:24 CST
