Re: Disk structures

From: Stefan Koehler <>
Date: Mon, 22 Feb 2016 20:50:59 +0100 (CET)
Message-ID: <>

Hi Robert,
just in addition to Kellyn's reply.

Be aware that "db file scattered read" is not necessarily the same every time. A "db file scattered read" (= multi-block I/O) can cover anything from 2 blocks up to db_file_multiblock_read_count (or, more precisely, "_db_file_exec_read_count") blocks - the actual size depends on extent boundaries and on single blocks that are already cached. As a consequence the I/O performance can differ widely even though it is the same wait event. Unfortunately AWR (and STATSPACK) do not provide the information needed to distinguish these cases.
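One place where the actual multi-block read size *is* visible is an extended SQL trace (event 10046), where each wait line carries a blocks= field. As a minimal sketch (the WAIT line layout is the standard trace format, but the specific lines and values below are made up for illustration), here is how one could histogram the scattered-read sizes from such a trace:

```python
import re
from collections import Counter

# Hypothetical excerpt from a 10046 extended SQL trace file; these exact
# lines are invented for illustration, but the field layout is the one
# Oracle writes for wait events.
trace = """\
WAIT #139: nam='db file scattered read' ela= 512 file#=5 block#=120 blocks=8 obj#=771 tim=1000
WAIT #139: nam='db file scattered read' ela= 1431 file#=5 block#=128 blocks=128 obj#=771 tim=2000
WAIT #139: nam='db file scattered read' ela= 77 file#=5 block#=256 blocks=2 obj#=771 tim=3000
WAIT #139: nam='db file sequential read' ela= 30 file#=5 block#=999 blocks=1 obj#=771 tim=4000
"""

# Count how many scattered reads occurred for each multi-block size.
# Single-block ("db file sequential read") waits are deliberately ignored.
pattern = re.compile(r"nam='db file scattered read' ela= (\d+) .*?blocks=(\d+)")
sizes = Counter()
for line in trace.splitlines():
    m = pattern.search(line)
    if m:
        sizes[int(m.group(2))] += 1

print(dict(sizes))  # {8: 1, 128: 1, 2: 1}
```

A distribution like this makes it obvious when one "db file scattered read" is a 2-block read near an extent boundary and another is a full 128-block read, which AWR's single averaged line hides.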

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage:
Twitter: _at_OracleSK

> "Storey, Robert (DCSO)" <> wrote on 22 February 2016 at 19:31:
> Sanity check. Am I overthinking / over-teching the issue?
> Moving to a new server. Current server has the storage internal, 6 drives configured into a RAID 10. Overall performance is good, but I think
> reads could be a bit better. Most of my biggest waits are scattered file reads.
> I have 45 gig of data files and 25 gig of index files. The current RAID 10 holds all my files, redo, etc., just on different logical volumes.
> So, the new server has twelve 600 GB 15K RPM SAS drives. It also has two 300 GB 15K SAS drives. My plan is to put my redo log groups on the 300 GB
> drives, one group per drive, along with a control file copy. That way redo has its own spindles.
> Of my 45 gig of data files, one file is 12 gig and contains a single tablespace with a single table. It’s basically my audit table for all actions
> from within the application. Every application action gets logged to that table. About 25 million rows that I keep trimmed. It has a matching 12 gig
> IDX file.
> I’m debating between three configs, trying to figure out what gives me the better “read”. I estimate that about 70% of my system actions are reads.
> Lots of small random writes. No bulk loads. For instance my audit log gets about 65000 inserts a day. There would be another 130K or so inserts
> to the other tables to drive that audit trail.
> Config A
> Create a RAID 10 with all 12 drives: six RAID 1 mirrored pairs which are then striped to make a ~3.2 TB disk pool. Logical volumes to separate the
> files, i.e., data to one volume, idx to the other.
> Config B
> Create 3 separate RAID10’s, each with 4 drives. Put all my DATA files (minus the one large datafile) on one RAID 10 and IDX on the other RAID 10.
> The Third RAID 10 would contain the separate data/idx for my largest table as well as the FRA.
> Config C
> Same as B except all data to one raid, all idx to the other raid, and FRA to the third array.
> Old-school thinking was that you wanted your data drives not to compete with your index drives, so that reads/writes on a data file and its matching
> idx file could occur concurrently.
> Thoughts…or am I overthinking?
> Thanks..
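As a rough, purely illustrative sketch of the spindle-count trade-off between the configs above (the per-spindle IOPS figure is an assumed round number for a 15K drive, not a measurement, and this ignores caching, stripe width, and queueing effects entirely):

```python
# Back-of-envelope random-read capacity for the proposed layouts.
# Assumption: ~180 random IOPS per 15K RPM SAS spindle (illustrative only).
# In RAID 10, reads can be serviced by either side of a mirror, so all
# spindles in an array contribute to read throughput.
IOPS_PER_SPINDLE = 180  # assumed figure, not a measurement

def raid10_read_iops(drives):
    # Read capacity scales with total spindle count in RAID 10.
    return drives * IOPS_PER_SPINDLE

config_a = raid10_read_iops(12)            # one 12-drive pool, shared by everything
config_b_or_c = [raid10_read_iops(4)] * 3  # three isolated 4-drive arrays

print(config_a)        # 2160 - available to whichever files are busy
print(config_b_or_c)   # [720, 720, 720] - isolated, so idle spindles can't help a hot array
```

The arithmetic shows why a single large RAID 10 often wins for a mostly-read workload: every read can draw on all twelve spindles, whereas splitting into three arrays caps any one hot file at a third of the spindles even while the others sit idle.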

Received on Mon Feb 22 2016 - 20:50:59 CET