

I/O tuning... Allocating spindles to databases

From: Murching, Bob <>
Date: Tue, 13 Sep 2005 19:09:25 -0400
Message-ID: <>


We are transitioning our database environment to a new SAN and I am trying to determine a good approach for allocating spindles to databases running on various boxes. This in turn will determine which boxes need which paths to which LUNs. Assumptions are fixed number of spindles available (about 100) and that spindle allocation only takes into account performance needs, not capacity. We have essentially four big I/O intensive databases... call 'em Prod1, Prod2, QA1 and Dev1, the latter two generally not in archive log mode but heavily used by various internal groups. Another assumption, we'd be using some fancy LVM or ASM or what-have-you so we can move spindles around after the transition to fine-tune... so I'm only trying to get rough ideas here.

What we've been doing is measuring physical reads/writes (tablespace datafiles, temp/undo, does *not* include online redo logs) per database, and measuring amount of redo blocks generated per database. Based on the relative % each database consumes of the total IO load in our environment, we allocate the appropriate percentage of spindles... so if Prod1 consumes 50% of the I/O and I have 100 disks to play with, Prod1 would get 50 disks, divvied up into some collection of RAID groups of course.
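For what it's worth, the proportional split described above can be sketched in a few lines. This is purely illustrative, not anything from the original post: the I/O figures are made-up examples, and largest-remainder rounding is just one way to make the per-database allocations sum exactly to the disk count.

```python
# Sketch of the proportional spindle-allocation scheme described above.
# Database names and I/O counts are hypothetical examples.

def allocate_spindles(io_load, total_disks):
    """Split total_disks across databases in proportion to their I/O load,
    using largest-remainder rounding so the allocations sum exactly."""
    total_io = sum(io_load.values())
    raw = {db: total_disks * io / total_io for db, io in io_load.items()}
    alloc = {db: int(r) for db, r in raw.items()}
    leftover = total_disks - sum(alloc.values())
    # Hand any remaining disks to the databases with the largest fractional parts.
    for db in sorted(raw, key=lambda d: raw[d] - alloc[d], reverse=True)[:leftover]:
        alloc[db] += 1
    return alloc

# Hypothetical physical read+write counts per sampling interval:
io_load = {"Prod1": 50_000, "Prod2": 30_000, "QA1": 12_000, "Dev1": 8_000}
print(allocate_spindles(io_load, 100))
# -> {'Prod1': 50, 'Prod2': 30, 'QA1': 12, 'Dev1': 8}
```

In practice each of those per-database disk counts would then be carved into RAID groups, which is where question 3 below comes in.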

The questions:
1. First off, am I completely off-base in my approach here?
2. If I'm taking the right approach, then how do I weigh redo blocks generated vs. physical reads+writes when determining where to allocate disk?
2a. How do I determine how many I/Os should be allocated to archive log destinations? Does the log destination require fast disk vs. online redo log locations? Or not?
3. How best to divvy up disks into RAID groups? Assume HW RAID-10 in the SAN. What's better, letting ASM logically stripe data over a triplet of 2+2 striped-mirrors, or just tossing everything into a single 6+6 striped-mirror? The former would give me a heck of a lot more granularity to move storage around when fine-tuning, and might be better for OLTPish "single row lookup" type environments whereas the latter might be better suited toward sustained "sequential-ish" access patterns. It's just a guess, however, and while we have production stuff running on ASM, we don't have enough empirical data to assess whether plaiding (SAN HW stripe + ASM SW stripe) is good, bad or merely ugly.

I'm not losing sleep over mis-allocating a disk here or there, but I would like to make sure we don't err to any particular extreme. Any thoughts would be much appreciated...


Received on Tue Sep 13 2005 - 18:11:24 CDT
