RE: Orion, NFS, & File Size

From: CRISLER, JON A <JC1706_at_att.com>
Date: Tue, 19 Jun 2012 15:20:12 +0000
Message-ID: <9F15274DDC89C24387BE933E68BE3FD32DC4C0_at_MISOUT7MSGUSR9D.ITServices.sbc.com>



You should create a number of datafiles, each around 30 GB in size - perhaps 5 or 10 of them. Orion will kick off multiple threads (you can control the number). Once you have a database set up you can also use the I/O calibration package (DBMS_RESOURCE_MANAGER.CALIBRATE_IO) for some easy benchmarks.
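As a rough sketch of the dd step (the directory, file names, and five-file count here are my own illustrative choices, and the size is deliberately tiny - a real run would target the NFS datafile volume with SIZE_MB=30720 for 30 GB files; Orion then reads the target paths from the <testname>.lun file):

```shell
#!/bin/sh
# Illustrative only: VOL would be the NFS-mounted datafile volume in practice,
# and SIZE_MB would be 30720 (30 GB) rather than 1 MB.
VOL=./orion_files
SIZE_MB=1
mkdir -p "$VOL"
: > mytest.lun                      # Orion reads its target paths from <testname>.lun
for i in 1 2 3 4 5; do
    # Create one test file per line of mytest.lun
    dd if=/dev/zero of="$VOL/orion_$i.dbf" bs=1M count="$SIZE_MB" 2>/dev/null
    echo "$VOL/orion_$i.dbf" >> mytest.lun
done
# Then point Orion at them, e.g.:
#   ./orion -run simple -testname mytest
```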

The NetApp aggregates need to be planned out: you will find that redo needs to be segregated from other datafiles, so the "aggregates" on the NetApp storage backend need to be well thought out. If you have multiple systems (dev, test, etc.), do not allow them to share aggregates with your prod system, as they will step on each other. In other words - keep your prod aggregates segregated from non-prod aggregates.

I would not create a single NFS volume: I would use at least 4 - datafiles, redo, flash recovery area, OCR/voting disks. This gives you a bit more parallelism at the OS level for I/O. When you get into testing, you should thoroughly test the NetApp controller failover/failback (takeover/giveback?) - we have found that this is frequently misconfigured or problematic, so you need to test it to make sure the config is correct. If correctly configured it works fine, and you need this feature for NetApp maintenance like Data ONTAP patches, etc.
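For the separate volumes, the mounts on the Linux side would typically use Oracle's long-standing recommended NFSv3 options for database files; a sketch (the filer name, export paths, and mount points below are made up for illustration, and your NetApp admin should confirm the exact options for your setup):

```
# /etc/fstab -- illustrative entries; filer/export/mount-point names are assumptions
filer01:/vol/prod_data  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
filer01:/vol/prod_redo  /u03/oraredo  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
filer01:/vol/prod_fra   /u04/orafra   nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
filer01:/vol/prod_crs   /u05/oracrs   nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac,actimeo=0  0 0
```

Splitting the mounts this way is what gives the OS-level I/O parallelism mentioned above, and it keeps the OCR/voting mount (which typically wants noac) separate from the datafile mounts.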

A lot of this is NetApp terminology- share this with your NetApp storage administrator and they will understand.

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Austin Hackett
Sent: Monday, June 18, 2012 3:49 PM
To: Oracle-L_at_freelists.org
Subject: Orion, NFS, & File Size

Hello List

I'm preparing to build a new 11.2.0.3.2 RAC cluster on OEL 5.4 (the latter isn't something I can change at the moment). The shared storage is a NetApp filer accessed via NFS. Prior to the Oracle installation, I plan to use Orion to check that the storage is performing as expected (I'll also use SLOB post-install).

According to section "8.4.6 Orion Troubleshooting" (http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#BABBEJIH ) in the 11.2 Performance Tuning manual:

"If you run on NAS storage ... the mytest.lun file should contain one or more paths of existing files ... The file has to be large enough for a meaningful test. The size of this file should represent the eventual expected size of your datafiles (say, after a few years of use)"

Assume the following about what the DB will look like in a few years:

  • All my datafiles will be on a single NFS volume
  • The datafiles will total 1TB in size
  • No individual datafile will be larger than, say 30GB

Does the statement in the manual mean that:

I should use dd to create 1 x 30GB file on the volume I'll be using for the datafiles

or

I should use dd to create a number of 30GB files on the volume I'll be using for the datafiles, totaling 1TB

I'm interpreting it as meaning the former, but I'd hoped to sanity-check my thinking.

If anyone could offer any help, it would be much appreciated...

Thanks

Austin

--
http://www.freelists.org/webpage/oracle-l

Received on Tue Jun 19 2012 - 10:20:12 CDT