Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


RE: I/O tuning... Allocating spindles to databases

From: Marquez, Chris <cmarquez_at_collegeboard.org>
Date: Tue, 13 Sep 2005 23:02:32 -0400
Message-ID: <B30C2483766F9342B6AEF108833CC84E05BD5F90@ecogenemld50.Org.Collegeboard.local>

Bob,

>> We are transitioning our database environment to
>> a new SAN and I am trying to determine a good approach
>> for allocating spindles to databases

>> I'm not losing sleep over mis-allocating a disk
>> here or there, but I would like to make sure we
>> don't err to any particular extreme.

>> Any thoughts would be much appreciated...

"Any thoughts"...here goes;

You have not been "invited" to enough storage meetings, and/or the SAN sales guys have not gotten hold of you yet. I might be a little...well, a lot cynical, but unless everyone is on board with your (right-thinking) DBA mind, none of this will happen. Your first statement alone made me cringe!

I have seen two big SAN implementations.
The prevailing wisdom (sales pitch) is "the SAN is huge and everything will be cached" and "you will have more I/O than you can imagine".

<FAST-FORWARD 6 MONTHS>

You find yourself in a detailed I/O meeting regarding the now-installed and running Black Box SAN. I was at one site where I was seeing >20 millisecond I/O read times from V$FILESTAT...it was as if someone had two hands on the disk!
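As a side note on where that 20 ms figure comes from: V$FILESTAT exposes cumulative counters, not latencies, so you derive the average yourself. READTIM is total read time in centiseconds and PHYRDS is the physical read count. A minimal sketch of the arithmetic (the function name and sample numbers are illustrative, not from this thread):

```python
# Sketch: average read latency from V$FILESTAT-style cumulative counters.
# READTIM is in centiseconds (hundredths of a second), PHYRDS is the
# number of physical reads, so avg ms per read = READTIM * 10 / PHYRDS.

def avg_read_ms(readtim_cs: int, phyrds: int) -> float:
    """Average read latency in milliseconds from cumulative counters."""
    if phyrds == 0:
        return 0.0  # no reads recorded yet; avoid division by zero
    return readtim_cs * 10.0 / phyrds

# Illustrative numbers: 180,000 cs of read time over 75,000 reads
# works out to 24 ms per read -- the kind of figure that should make
# you question the "everything will be cached" pitch.
print(avg_read_ms(180_000, 75_000))
```

In practice you would sample the counters twice and difference them, since they accumulate from instance startup.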

I asked one disk admin how I could know that the constant FULL TABLE SCANs of db1 were not affecting the performance of db2, with everything shoved into the same SAN. Answer: "you don't know". :o|

You are doing some good planning, but when working with a good group of SAs and disk admins, two things were hardest for me. One: the levels of physical and virtual abstraction between the actual disks and the datafiles...arrays, LUNs, volumes, paths, etc....many levels and layers with a big high-end SAN. Two: the good sales guys and disk admins will tell you that you need to be careful when trying to do good...trying to plan. The high-end SANs have lots of smarts, and you can end up undoing some of the good stuff already built into the SAN layout...like striping a stripe...I had to go over this stuff again and again to grasp it.

In the end, for one critical database we ended up dedicating some SAN arrays to that specific database...as if the disks were (physically) attached to that DB server...I think you said you would do that too.

Good luck, and keep up the good planning.

Chris Marquez
Oracle DBA

-----Original Message-----
From: oracle-l-bounce_at_freelists.org on behalf of Murching, Bob
Sent: Tue 9/13/2005 7:09 PM
To: 'oracle-l_at_freelists.org'
Subject: I/O tuning... Allocating spindles to databases  

Hello!

We are transitioning our database environment to a new SAN and I am trying to determine a good approach for allocating spindles to databases running on various boxes. This in turn will determine which boxes need which paths to which LUNs. Assumptions are fixed number of spindles available (about 100) and that spindle allocation only takes into account performance needs, not capacity. We have essentially four big I/O intensive databases... call 'em Prod1, Prod2, QA1 and Dev1, the latter two generally not in archive log mode but heavily used by various internal groups. Another assumption, we'd be using some fancy LVM or ASM or what-have-you so we can move spindles around after the transition to fine-tune... so I'm only trying to get rough ideas here.

What we've been doing is measuring physical reads/writes (tablespace datafiles, temp/undo, does *not* include online redo logs) per database, and measuring amount of redo blocks generated per database. Based on the relative % each database consumes of the total IO load in our environment, we allocate the appropriate percentage of spindles... so if Prod1 consumes 50% of the I/O and I have 100 disks to play with, Prod1 would get 50 disks, divvied up into some collection of RAID groups of course.
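The proportional scheme described above is easy to sketch. This is a minimal illustration, assuming the I/O figures per database have already been measured; the database names and numbers are made up for the example:

```python
# Sketch of proportional spindle allocation: each database gets a share
# of the spindles matching its share of total measured I/O, with any
# rounding leftovers handed to the busiest databases first.

def allocate_spindles(io_per_db: dict, total_spindles: int) -> dict:
    """Allocate spindles proportionally to each database's I/O load."""
    total_io = sum(io_per_db.values())
    alloc = {db: int(total_spindles * io / total_io)
             for db, io in io_per_db.items()}
    leftover = total_spindles - sum(alloc.values())
    # Distribute rounding remainders to the heaviest consumers first.
    for db in sorted(io_per_db, key=io_per_db.get, reverse=True):
        if leftover == 0:
            break
        alloc[db] += 1
        leftover -= 1
    return alloc

# Prod1 drives half the measured I/O, so with 100 disks it gets 50,
# matching the example in the message above.
print(allocate_spindles(
    {"Prod1": 500, "Prod2": 250, "QA1": 150, "Dev1": 100}, 100))
```

The spindle counts would then be rounded to whatever RAID-group sizes the SAN supports, which is where the fine-tuning with LVM or ASM comes in.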

The questions,
1. First off, am I completely off-base in my approach here?
2. If I'm taking the right approach, then how do I weigh redo blocks generated vs. physical reads+writes when determining where to allocate disk?
2a. How do I determine how many I/Os should be allocated to archive log destinations? Does the archive log destination require disk as fast as the online redo log locations? Or not?
3. How best to divvy up disks into RAID groups? Assume HW RAID-10 in the SAN. What's better, letting ASM logically stripe data over a triplet of 2+2 striped-mirrors, or just tossing everything into a single 6+6 striped-mirror? The former would give me a heck of a lot more granularity to move storage around when fine-tuning, and might be better for OLTPish "single row lookup" type environments whereas the latter might be better suited toward sustained "sequential-ish" access patterns. It's just a guess, however, and while we have production stuff running on ASM, we don't have enough empirical data to assess whether plaiding (SAN HW stripe + ASM SW stripe) is good, bad or merely ugly.

I'm not losing sleep over mis-allocating a disk here or there, but I would like to make sure we don't err to any particular extreme. Any thoughts would be much appreciated...

Bob

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Sep 13 2005 - 22:05:48 CDT

