NetApp NFS I/O contention masked as ??? (CPU?)

From: Dan Norris <dannorris_at_dannorris.com>
Date: Tue, 08 Jul 2008 12:16:28 -0600
Message-ID: <4873AEFC.3090506@dannorris.com>


I have (the same) 3-node RAC 10.2.0.3 64-bit cluster on RHEL4 x86-64. It uses NetApp NFS for shared storage. Everything appears to be configured correctly per the docs and various online sources (mount options, buffer sizes, kernel parameters, etc.). However, since the Linux kernel accounts for NFS traffic as network activity rather than block I/O, the waiting never shows up as iowait, and it's difficult to tell when I/O is the bottleneck. Certainly, the DB wait events are one indication, but is there some magic at the OS level to see/detect NFS I/O waiting? I know nfsstat exists, but as far as I can tell it only provides counters, not contention or service-time information. The v$filestat information shows good service times (<6 ms avg) for all datafiles. Still, for a specific time frame, is there a way to measure potential I/O bottlenecks?
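To make the OS-level question concrete, here is the sort of thing I've been sketching: a quick Python script that pulls per-op counts and average RTT out of /proc/self/mountstats. I'm assuming a kernel new enough to expose mountstats (which stock RHEL4's 2.6.9 may not be), and the mount point below is just a placeholder:

#!/usr/bin/env python
"""Rough sketch: average per-op NFS latency from /proc/self/mountstats.

Assumes a kernel that exposes /proc/self/mountstats; the mount point
below is a placeholder for this environment.
"""
MOUNT = "/u02/oradata"   # hypothetical NFS mount to inspect

def mountstats(path):
    """Yield (opname, ops, avg_rtt_ms) from the per-op stats of one mount."""
    in_mount, in_ops = False, False
    for line in open("/proc/self/mountstats"):
        if line.startswith("device "):
            in_mount = (" mounted on %s " % path) in line
            in_ops = False
        elif in_mount and line.strip() == "per-op statistics":
            in_ops = True
        elif in_mount and in_ops and ":" in line:
            name, nums = line.split(":", 1)
            fields = [int(n) for n in nums.split()]
            # fields: ops, transmissions, timeouts, bytes sent,
            #         bytes received, queue ms, RTT ms, execute ms
            if fields[0]:
                yield name.strip(), fields[0], fields[6] / float(fields[0])

for op, ops, avg_rtt in mountstats(MOUNT):
    print "%-12s %10d ops  avg RTT %.2f ms" % (op, ops, avg_rtt)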

Finally, since network I/O requires some CPU, is it reasonable to assume that heavy NFS I/O will be at least partially masked as high CPU utilization?
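Along those lines, I've been thinking about sampling the aggregate /proc/stat line to see how CPU time splits into user/system/iowait/softirq over an interval, on the assumption that NFS client and network processing land in system and softirq time rather than iowait. A rough Python sketch (the field layout assumes a 2.6 kernel):

#!/usr/bin/env python
"""Rough sketch: break aggregate CPU into user/nice/system/idle/iowait/
irq/softirq over a short interval, to see how much "CPU" is really
kernel-side network/NFS work. Assumes a 2.6-style /proc/stat layout.
"""
import time

FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq")

def cpu_ticks():
    line = open("/proc/stat").readline()       # aggregate "cpu" line
    return [int(v) for v in line.split()[1:1 + len(FIELDS)]]

before = cpu_ticks()
time.sleep(10)                                 # sample interval in seconds
after = cpu_ticks()

deltas = [b - a for a, b in zip(before, after)]
total = float(sum(deltas)) or 1.0
for name, d in zip(FIELDS, deltas):
    print "%-8s %6.1f%%" % (name, 100.0 * d / total)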

Tuning NetApp I/O is a bit new to me, so pointers on tools and techniques for measuring contention and utilization would be helpful. If it matters, the NFS I/O uses dedicated network interfaces on the hosts.
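For what it's worth, one check I can do myself is whether those dedicated interfaces are anywhere near line rate. Here's a rough Python sketch that diffs the byte counters in /proc/net/dev over an interval; the interface name and link speed are placeholders for this environment:

#!/usr/bin/env python
"""Rough sketch: throughput on the dedicated NFS interface from
/proc/net/dev, compared against an assumed link speed. The interface
name and speed below are placeholders.
"""
import time

IFACE = "eth2"                 # hypothetical dedicated NFS interface
LINK_MBIT = 1000               # assumed link speed in Mbit/s
INTERVAL = 10                  # seconds between samples

def byte_counters(iface):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    for line in open("/proc/net/dev"):
        if ":" in line:
            name, data = line.split(":", 1)
            if name.strip() == iface:
                fields = data.split()
                return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
    raise ValueError("interface %s not found" % iface)

rx1, tx1 = byte_counters(IFACE)
time.sleep(INTERVAL)
rx2, tx2 = byte_counters(IFACE)

rx_mbit = (rx2 - rx1) * 8 / 1e6 / INTERVAL
tx_mbit = (tx2 - tx1) * 8 / 1e6 / INTERVAL
print "rx %.1f Mbit/s  tx %.1f Mbit/s  (%.0f%% of %d Mbit line rate)" % (
    rx_mbit, tx_mbit, 100.0 * max(rx_mbit, tx_mbit) / LINK_MBIT, LINK_MBIT)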

Thanks,
Dan

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jul 08 2008 - 13:16:28 CDT
