Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: tuning NFS for large backups

From: J.O. Aho <user_at_example.net>
Date: Wed, 29 Nov 2006 12:33:16 +0100
Message-ID: <4t59ftF12jn8sU1@mid.individual.net>


NetComrade wrote:

Hello,

  First of all, I'm no expert on these things, but I have a little bit of experience which may give you some help until the real pros make their posts. I have rearranged your original post a little bit.

 > Unfortunately the disk array in this particular
 > case seems to 'suck' badly (EMC Dell AX100) as it barely does over
 > 20megs per second

Not sure if your array is IDE- or SCSI-based, but it seems you need to tune the system a bit here to gain speed (even my old slow Sparc Ultra 10 almost beats your array in speed). For IDE drives you use hdparm, and for SCSI drives you use sdparm.
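A quick way to sanity-check what the array actually delivers is a sequential write with dd (a rough sketch; the target path is an example you should point at a file on the array's filesystem, and this measures the filesystem as well as the disks):

```shell
# Rough sequential write throughput check; dd reports MB/s on stderr
# when it finishes. TARGET is an example path -- in your case point it
# at a file on the array's filesystem, e.g. a file under its mount point.
TARGET=/tmp/ddtest
dd if=/dev/zero of="$TARGET" bs=1M count=64
rm -f "$TARGET"
```

Reading the same file back with dd (if=... of=/dev/null) after dropping caches gives a comparable read figure.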

If your hard drives are different from each other, then you have to experiment a bit more, as different types of hard drives will have different optimal values.

I use the following for one of my hard drives (IDE; the device paths below are examples, adjust them to your system):

hdparm -A1 -d1 -X udma5 -c3 -W1 -u1 -a32 -m16 /dev/hda

another uses:

hdparm -A1 -a64 -d1 -X69 -c3 -W1 -u1 -m8 /dev/hdb

You should be able to tweak up the speed on the array.

> We do Oracle backups to NFS (they go subsequently to tape), and the
> load on both db side and NFS side spike significantly. The load spikes
> on DB are not _that_ large, however load spikes on NFS are very large
> (run queue in 20-50), especially during 'level 0' backups which are
> .5TB to 1TB in size.

Using larger buffers does make things better, and if you are using a gigabit LAN, then increasing the MTU to 9000 (jumbo frames) can be quite a good thing to do.
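For example, assuming the NFS traffic goes over eth0 (the interface name is an assumption, and every device on the path, switch ports included, must support jumbo frames):

```shell
# Raise the MTU to 9000 on the interface carrying the NFS traffic.
# "eth0" is an example name; this needs root, and the setting must be
# applied on both the NFS server and the client (and the switch between).
ip link set dev eth0 mtu 9000
```

Verify with `ip link show eth0` that the new MTU took effect, and test with a large ping before relying on it.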

See to it that this is done before you start the NFS export on the NFS server; you may need to adjust the buffer sizes so they are more optimal for your system.

echo "1048560" > /proc/sys/net/core/rmem_default
echo "1048560" > /proc/sys/net/core/wmem_default
echo "2097136" > /proc/sys/net/core/rmem_max
echo "2097136" > /proc/sys/net/core/wmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem
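To make these settings survive a reboot, the same values can go into /etc/sysctl.conf (a sketch; the sysctl key names simply mirror the /proc/sys paths, and you should keep whatever values you settled on above):

```
# /etc/sysctl.conf equivalents of the echo commands above.
# Apply without rebooting via: sysctl -p
net.core.rmem_default = 1048560
net.core.wmem_default = 1048560
net.core.rmem_max = 2097136
net.core.wmem_max = 2097136
net.ipv4.tcp_rmem = 4096 5000000 5000000
net.ipv4.tcp_wmem = 4096 65536 5000000
```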

Another thing to look at on the NFS client side is to make sure you mount with NFS3 (in case you aren't already using NFS4): add the nfsvers=3 option to the NFS mount in your /etc/fstab. I have noticed that, at least on my system, mounts have come up as NFS2 when the option wasn't supplied.
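A sketch of such an fstab entry (the server name, export path, and mount point are placeholders; the rsize/wsize values are one common choice for large transfers, not something I have measured on your hardware):

```
# /etc/fstab -- example NFS mount pinned to version 3 with large I/O sizes
nfsserver:/export/backup  /backup  nfs  nfsvers=3,rsize=32768,wsize=32768,hard,intr  0 0
```

After editing, `mount /backup` and then `cat /proc/mounts` will show which NFS version was actually negotiated.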

Hope there is something that makes things better for you.

  //Aho

Received on Wed Nov 29 2006 - 05:33:16 CST

