RE: Orion, NFS, & File Size

From: CRISLER, JON A <JC1706_at_att.com>
Date: Tue, 19 Jun 2012 17:50:58 +0000
Message-ID: <9F15274DDC89C24387BE933E68BE3FD32DC8F9_at_MISOUT7MSGUSR9D.ITServices.sbc.com>



The main advantage that I see (and I might be overlooking something) with higher numbers of data files comes down to RMAN backup and possibly parallel query. More data files and more RMAN channels generally give you better performance than a few very large data files. Using the SECTION SIZE feature in RMAN (multisection backups) can help in 11g if you do have very large data files.
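
For example - just a sketch, and the channel count and section size are placeholders you would tune for your environment - a multisection backup along these lines lets several channels work on the same big file:

rman target / <<'EOF'
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
# carve each datafile into 8 GB sections so all 4 channels stay busy
BACKUP SECTION SIZE 8G DATABASE;
EOF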

I am concerned that you might not have enough disks overall, but it depends on your NetApp model, disk type, cache size, etc. If you're using a NetApp Snapshot-based product, make sure you follow the best practices on where to locate control files, temp files, redo, etc., so your snapshots are as efficient as possible.

-----Original Message-----
From: Austin Hackett [mailto:hacketta_57_at_mac.com]
Sent: Tuesday, June 19, 2012 12:48 PM
To: CRISLER, JON A; Oracle-L_at_freelists.org
Subject: Re: Orion, NFS, & File Size

Hi Jon

Many thanks for your response. We're an existing NetApp shop here (although I'm pretty new to the organisation), and are currently doing what you suggest. The failover of the controller wasn't something in my HA testing plan, so thanks for the heads up.

In terms of Orion testing, the current plan after some further research today is to create 28 x 30 GB files, and then run the test with num_disks = 28. The dedicated data file volume will be on an aggregate that consists of 2 RAID-DP groups, each with 16 disks, i.e. (2 x 16) - 4 parity = 28. Does that sound like a plan, or do you tend to see little value in file counts greater than 10?
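
For reference, the rough test script I have in mind is below - the paths are just placeholders, and I'll double-check the switches against orion -help on this version:

# create 28 x 30 GB test files on the dedicated data file volume
for i in $(seq -w 1 28); do
  dd if=/dev/zero of=/oradata_nfs/orion/file${i}.dbf bs=1M count=30720
done

# Orion reads the target paths from <testname>.lun, one per line
ls /oradata_nfs/orion/file*.dbf > mytest.lun

# num_disks matched to the 28 data spindles behind the aggregate
./orion -run normal -testname mytest -num_disks 28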

Thanks

Austin

On 19 Jun 2012, at 16:20, "CRISLER, JON A" <JC1706_at_att.com> wrote:

> You should create a number of datafiles at 30 GB each - perhaps 5 or 10. Orion will kick off multiple threads (you can control the number). Once you have a DB set up you can also use the I/O calibration procedure (DBMS_RESOURCE_MANAGER.CALIBRATE_IO) for some easy benchmarks.
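>
> For example - just a sketch, assuming timed_statistics is on and async I/O is enabled, and the disk count / latency target are whatever fits your setup:
>
> sqlplus / as sysdba <<'EOF'
> SET SERVEROUTPUT ON
> DECLARE
>   l_max_iops PLS_INTEGER;
>   l_max_mbps PLS_INTEGER;
>   l_latency  PLS_INTEGER;
> BEGIN
>   -- 28 physical disks behind the volume, 20 ms maximum tolerated latency
>   DBMS_RESOURCE_MANAGER.CALIBRATE_IO(28, 20, l_max_iops, l_max_mbps, l_latency);
>   DBMS_OUTPUT.PUT_LINE('max_iops='||l_max_iops||' max_mbps='||l_max_mbps||' latency='||l_latency);
> END;
> /
> EOF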
>
> The NetApp aggregates need to be planned out: you will find that REDO needs to be segregated from the datafiles, so the aggregates on the NetApp storage backend need to be laid out with that in mind. If you have multiple systems (dev, test, etc.) - do not allow them to share aggregates with your prod system, as they will step on each other. In other words - keep your prod aggregates segregated from non-prod aggregates.
>
> I would not create a single NFS volume: I would use at least 4 - datafiles, redo, flash recovery area, OCR-VOTE. This gives you a bit more parallelism at the OS level for I/O. When you get into testing, you should thoroughly test the NetApp controller failover and failback (takeover / giveback in NetApp terms) - we have found that this is frequently misconfigured or problematic, so you need to test it to make sure the config is correct. If correctly configured it works fine, and you need this feature for NetApp maintenance like ONTAP patches, etc.
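>
> As a very rough illustration only - the volume and mount point names are made up, and the exact mount options should come from the current NetApp / Oracle NFS best-practice notes for your versions - the end result is one mount per I/O type, along these lines:
>
> mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0 filer1:/vol/prod_data /u02/oradata
> mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0 filer1:/vol/prod_redo /u02/oraredo
> mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0 filer1:/vol/prod_fra  /u02/orafra
> mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0 filer1:/vol/prod_crs  /u02/oracrs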
>
> A lot of this is NetApp terminology- share this with your NetApp storage administrator and they will understand.
>
> -----Original Message-----
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Austin Hackett
> Sent: Monday, June 18, 2012 3:49 PM
> To: Oracle-L_at_freelists.org
> Subject: Orion, NFS, & File Size
>
> Hello List
>
> I'm preparing to build a new 11.2.0.3.2 RAC cluster on OEL 5.4 (the latter isn't something I can change at the moment). The shared storage is a NetApp filer via NFS. Prior to Oracle installation, I plan to use Orion to check the storage is performing as expected (I'll also use SLOB post-install).
>
> According to section "8.4.6 Orion Troubleshooting" (http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#BABBEJIH) in the 11.2 Performance Tuning manual:
>
> "If you run on NAS storage ... the mytest.lun file should contain one or more paths of existing files ... The file has to be large enough for a meaningful test. The size of this file should represent the eventual expected size of your datafiles (say, after a few years of use)"
>
> Assume the following about what the DB will look like in a few years:
>
> - All my datafiles will be on a single NFS volume
> - The datafiles will total 1TB in size
> - No individual datafile will be larger than, say, 30 GB
>
> Does the statement in the manual mean that:
>
> I should use dd to create 1 x 30 GB file on the volume I'll be using for the datafiles
>
> or
>
> I should use dd to create a number of files on the volume I'll be using for the datafiles, each 30 GB in size, totaling 1 TB
>
> I'm interpreting it as meaning the former, but had hoped to sanity check my thinking
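>
> For the former, I'd be thinking of something along these lines (the path is just an example):
>
> dd if=/dev/zero of=/oradata_nfs/orion_test.dbf bs=1M count=30720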
>
> If anyone could offer any help, it would be much appreciated...
>
> Thanks
>
> Austin
>
>
> --
> http://www.freelists.org/webpage/oracle-l
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Jun 19 2012 - 12:50:58 CDT
