Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: How many of you use S.A.M.E?

From: Mark Brinsmead <pythianbrinsmead_at_gmail.com>
Date: Fri, 2 Feb 2007 22:07:33 -0700
Message-ID: <cf3341710702022107wb4bed64q7fc17aea090cd319@mail.gmail.com>


Thank you, Christian.

This is an excellent, concrete example of what I was trying to articulate earlier.

But you've only shown part of the story.

First, you have demonstrated for Amonte that you can indeed (strongly) influence where the data is placed on the device, provided you have the right level of control and the device itself is not telling you too many lies... ;-)

Second, you have demonstrated that sequential I/O is (usually) noticeably faster on the outer tracks of the disk than on the inner ones.

What you have missed is the effect all this has on RIOPS (Random I/Os Per Second).

Try something like this:

Trial 1: Create one partition spanning the entire disk, and perform as many random reads as you can within that partition. Measure the rate.

Trial 2: Create a partition spanning the outer 20% of the disk. Perform as many random reads as you can within that partition. Measure the rate.

Trial 3: Create two partitions, one on the outer 10%, and one on the inner 10%. Concurrently perform as many random reads as you can within these partitions. Measure the rate.
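If you want to script these trials, here is a rough sketch. (The device path, region boundaries, and 8 KB read size are placeholders of mine, not something from the thread.)

```python
# A rough sketch of the random-read trials. The device path, region
# boundaries, and 8 KB read size are illustrative placeholders.
import os, random, time

READ_SIZE = 8 * 1024   # one "random I/O"

def random_read_iops(path, start, length, seconds=5.0):
    """Issue random reads inside [start, start+length) for `seconds`
    and return the achieved reads/sec."""
    fd = os.open(path, os.O_RDONLY)
    try:
        count, deadline = 0, time.monotonic() + seconds
        while time.monotonic() < deadline:
            offset = start + random.randrange(max(1, length - READ_SIZE))
            os.pread(fd, READ_SIZE, offset)
            count += 1
        return count / seconds
    finally:
        os.close(fd)

# Trial 1: whole disk  -> random_read_iops("/dev/sda", 0, disk_bytes)
# Trial 2: outer 20%   -> random_read_iops("/dev/sda", 0, disk_bytes // 5)
# Trial 3: run two of these concurrently (two threads or processes),
#          one on the outer 10% and one on the inner 10%.
```

On a real spindle you would hit the raw device with O_DIRECT (which needs aligned buffers, so a tool like fio is easier in practice); against an ordinary file, the page cache will hide the seeks and make the numbers meaningless.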

Now, it's the same physical device in all three cases. The rotational latency and average seek time are the same in all cases, so the RIOPS should be the same in all cases, right?

Of course not. In the first case, you ought to get something very close to the "nominal" RIOPS advertised in (or derived from) the disk drive's data sheet.

In the second case, your worst-case seek will only be 1/5th of a "full-bore" seek. Even allowing for acceleration, deceleration, and "settling" time, the effective average seek time in the second case will be much lower than in the first. Not 80% lower, but maybe 40% lower. This will translate into faster I/Os. Not 40% faster, but maybe 15% or 20%.

In the third case you will (or should) be forcing the disk to perform -- on average -- much longer seeks than the average in case #1. (In case #1, the average seek should be 50% of a "full-bore" seek; in this case you ought to be averaging closer to 90%). This will translate to much lower RIOPS.
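You can sanity-check the seek arithmetic with a quick Monte Carlo under a toy linear model (each read lands uniformly in its allowed span of the stroke; the "seek" is the distance from the previous head position). Worth noting: under this simple model the whole-disk average works out nearer 1/3 of a full stroke than 1/2, but the ordering is what matters, and the two-extremes layout does average roughly 0.9 of a full stroke.

```python
# Toy Monte Carlo of average seek *distance* (fraction of full stroke).
# Real seek *time* is not linear in distance (acceleration, settling),
# so treat these as relative indicators only.
import random

def avg_seek(regions, trials=200_000):
    """regions: list of (lo, hi) stroke fractions to read from;
    reads round-robin across the regions."""
    pos, total = 0.5, 0.0
    for i in range(trials):
        lo, hi = regions[i % len(regions)]
        nxt = random.uniform(lo, hi)
        total += abs(nxt - pos)
        pos = nxt
    return total / trials

whole    = avg_seek([(0.0, 1.0)])              # Trial 1: ~1/3 of the stroke
outer    = avg_seek([(0.0, 0.2)])              # Trial 2: short seeks only
extremes = avg_seek([(0.0, 0.1), (0.9, 1.0)])  # Trial 3: ~0.9 of the stroke
```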

Bonus Assignment:

Repeat trials #1 and #3 with large sequential I/Os. Maybe 100MB at a time. (For testing purposes, maybe 32KB I/Os, even if your OS allows more.) Measure throughput instead of RIOPS. Watch the throughput tank in the latter scenario! In this case, throughput is likely to drop by 90% or so. (It did the last time I performed such a test, but that was more years ago than I care to think about!)
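The bonus assignment can be sketched the same way: one read cursor for the pure-sequential case, two cursors at opposite ends of the disk to force a full-stroke seek between chunks. The 32 KB chunk size follows the suggestion above; the path and offsets are placeholders of mine.

```python
# Sequential throughput: one cursor vs. two cursors far apart.
import os, time

CHUNK = 32 * 1024

def throughput_mb_s(path, offsets, total_bytes):
    """Read total_bytes in CHUNK-sized pieces, round-robin across the
    given starting offsets. One offset = pure sequential; two offsets
    far apart = a long seek between every pair of chunks."""
    fd = os.open(path, os.O_RDONLY)
    try:
        cursors = list(offsets)
        done, t0 = 0, time.monotonic()
        while done < total_bytes:
            i = (done // CHUNK) % len(cursors)
            os.pread(fd, CHUNK, cursors[i])
            cursors[i] += CHUNK
            done += CHUNK
        return done / (time.monotonic() - t0) / 1e6
    finally:
        os.close(fd)

# Trial 1: throughput_mb_s("/dev/sda", [0], 100 * 1024 * 1024)
# Trial 3: throughput_mb_s("/dev/sda", [0, disk_bytes * 9 // 10],
#                          100 * 1024 * 1024)
```

The same caveat applies: run it against the raw device with the cache out of the picture, or you will be benchmarking RAM.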

Note: I have performed tests like these, but a very long time ago. The results I have described, however, are less "recollection" than "forecast", based on a fairly simple model of disk-drive physics. Actual mileage will vary, especially if you're using hardware "smarter" than a typical SATA drive...

On 2/2/07, Christian Antognini <Christian.Antognini_at_trivadis.com> wrote:
>
> > By the way have you tried by chance what Loaiza suggested,
> > putting data in specfic physical sectors of a hard drive?
> > I am really curious how can that be achieved. S.A.M.E is
> > simple but putting data as he says makes life impossible
> > dont you think so?
>
> Hi
>
> I'm just formatting a new SAS disk, so here is an example...
>
> With fdisk I divided the disk into 4 partitions of the same size.
> Basically, for each of them I simply specified the start/end cylinders.
>
> At the end I have the following situation:
>
> [root_at_helicon .vnc]# fdisk -l /dev/sda
>
> Disk /dev/sda: 73.4 GB, 73407820800 bytes
> 255 heads, 63 sectors/track, 8924 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Device Boot      Start       End    Blocks  Id System
> /dev/sda1            1      2231  17920476  83 Linux
> /dev/sda2         2232      4463  17928540  83 Linux
> /dev/sda3         4464      6695  17928540  83 Linux
> /dev/sda4         6696      8924 17904442+  83 Linux
>
> And here a quick performance test with hdparm:
>
> [root_at_helicon .vnc]# hdparm -t /dev/sda?
>
> /dev/sda1:
> Timing buffered disk reads: 252 MB in 3.01 seconds = 83.73 MB/sec
>
> /dev/sda2:
> Timing buffered disk reads: 234 MB in 3.01 seconds = 77.62 MB/sec
>
> /dev/sda3:
> Timing buffered disk reads: 214 MB in 3.00 seconds = 71.27 MB/sec
>
> /dev/sda4:
> Timing buffered disk reads: 190 MB in 3.01 seconds = 63.22 MB/sec
>
>
> HTH
> Chris
> --
> http://www.freelists.org/webpage/oracle-l

-- 
Cheers,
-- Mark Brinsmead
   Senior DBA,
   The Pythian Group
   http://www.pythian.com/blogs

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Feb 02 2007 - 23:07:33 CST

