
Re: 64k or 128k stripe - 8k block 8 multiblock read?

From: Noons <wizofoz2k_at_yahoo.com.au>
Date: Sat, 20 Sep 2003 00:35:38 +1000
Message-ID: <3f6b1430$0$14558$afc38c87@news.optusnet.com.au>


"Domenic G." <domenicg_at_hotmail.com> wrote in message news:c7e08a19.0309190544.72467dec_at_posting.google.com...
> Thanks Nuno. One site was suggesting to set it at 16k!! That would
> mean that one logical I/O of 64k on a FTS would generate 4 physical
> I/Os (assuming the stripe size per disk was 16k and there were 4 disks
> in the stripe set). His reasoning was that this would balance the I/O
> evenly over all 4 disks in the set. That made absolutely no sense to
> me at all because then you'd have to wait for all 4 physical I/Os to
> complete. Wouldn't that be slower?!

Not necessarily. If you look at Connor's reply, he also suggests something similar. There are a number of things to be aware of here:

For one manufacturer, a stripe size of 16K may actually mean 16K on EACH disk of a given stripe set. In that case the total size of a striped I/O is 16K x the number of disks in the stripe set, and the potential parallelism is achieved by specifying a DFMBR value of 16K x #disks.

For another manufacturer, the stripe size may mean the total across ALL disks in the stripe set. In that case a 64K stripe is 4 x 16K, IF you striped across 4 disks, with the potential parallelism achieved by reading the 4 disks in parallel. There you want to set the DFMBR value to 64K. Ie, ignore the # of disks in this case and set it to the stripe size only.
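To make the arithmetic concrete, here's a rough sketch of the sums (Python, purely for illustration; the names are mine, not Oracle's, and I'm assuming the 8K block from the subject line -- remember the actual DFMBR parameter is specified in blocks, not bytes, so multiblock read size = db_block_size x DFMBR):

    # Rough sketch only -- function and variable names are mine.
    db_block_size = 8 * 1024          # 8K block, as in the subject line

    def dfmbr_blocks(stripe_unit, disks, per_disk=True):
        """Blocks needed so one multiblock read covers a full stripe.
        per_disk=True  -> stripe_unit is the chunk on EACH disk (first case)
        per_disk=False -> stripe_unit is the TOTAL width across all disks (second case)"""
        stripe_width = stripe_unit * disks if per_disk else stripe_unit
        return stripe_width // db_block_size

    # First case: 16K on each of 4 disks -> 64K stripe width -> DFMBR = 8
    print(dfmbr_blocks(16 * 1024, 4, per_disk=True))     # 8

    # Second case: 64K is the width across ALL 4 disks -> still DFMBR = 8
    print(dfmbr_blocks(64 * 1024, 4, per_disk=False))    # 8

Either way, with the 8K block from the subject line a 64K stripe width lines up with the "8 multiblock read" in the thread title.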

Some manufacturers' software will initiate the I/O on each disk in parallel, and "construct" in memory the stripe "block" as results arrive. Others may initiate seeks in parallel to each disk and then read/write sequentially. It depends on the "smarts" built into your LVM layer, your controllers and their number, the number and distribution of the disks across controllers, and a few other factors.
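Just to illustrate the mapping (my own simplification, not any vendor's LVM code): a striped read breaks down into per-disk chunks, which the volume layer can then issue in parallel and reassemble in memory in stripe order. Something like:

    # Simplified RAID-0-style chunk mapping; real LVMs differ.
    STRIPE_UNIT = 16 * 1024   # 16K chunk on each disk
    N_DISKS = 4

    def chunks_for(offset, length):
        """Map a logical byte range onto (disk, disk_offset, size) pieces."""
        pieces = []
        end = offset + length
        while offset < end:
            chunk_no = offset // STRIPE_UNIT
            disk = chunk_no % N_DISKS
            disk_off = (chunk_no // N_DISKS) * STRIPE_UNIT + offset % STRIPE_UNIT
            size = min(STRIPE_UNIT - offset % STRIPE_UNIT, end - offset)
            pieces.append((disk, disk_off, size))
            offset += size
        return pieces

    # One 64K logical read starting at offset 0 touches each of the 4 disks once;
    # whether those 4 pieces are read in parallel or one after the other is up
    # to the LVM/controller smarts.
    print(chunks_for(0, 64 * 1024))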

Then you have the smarts of the actual file system to consider. Veritas is a good example of a very smart one with all sorts of optimisations. Linux ext2 is a good example of one with very few smarts, plain vanilla I/O mapping. What happens to the file system I/O logic when, for example, the disks involved ALSO hold plain files and folders used to store programs, or temp OS files?

Now, some key things to keep in mind here are, IMHO:

1- Do you know or can you find out ALL the above details about your particular case?
2- Can your hardware really accept multiple disks transferring at (potentially) disk cache retrieval speed, simultaneously?
3- Can the I/O bus itself cope with that?
4- Are the datafiles on each disk sufficiently contiguous that the NEXT request for a stripe "block" can be satisfied with only rotational latency coming into play?
5- Are your disks and partitions and file systems sufficiently isolated that you will NOT have other file-system-initiated access interfere with your planned parallel activity?

And a few other items I won't mention to make sure the whole lot remains relatively clear.

You can see that while 64K and larger stripe sizes may sound like a good idea, in *some* cases they may actually create a saturation problem for your I/O system. Hence the concept of using a smaller size.

You can also see why it's sometimes better to consider this level of optimisation only IF you have full control over disk and file system placement and use, or if you are using raw partitions, where the effect of file system software is not relevant.

Like Connor said, and at the risk of sounding horribly repetitive: "it depends".

-- 
Cheers
Nuno Souto
wizofoz2k_at_yahoo.com.au.nospam
Received on Fri Sep 19 2003 - 09:35:38 CDT
