Re: How to determine stripe size and stripe width on HPUX 11.11 for MBRC

From: Greg Rahn <greg_at_structureddata.org>
Date: Thu, 4 Sep 2008 15:11:25 -0700
Message-ID: <a9c093440809041511s3eb66950o50998d65dcb1f070@mail.gmail.com>


On Thu, Sep 4, 2008 at 1:49 PM, Lou Avrami <avramil_at_concentric.net> wrote:
> I have inherited several Oracle 9.2.0.8 data warehouses on HP-UX 11.11 servers, each 2+ TBs. All of them are experiencing various performance issues.

What metrics do you have that point to I/O being a problem?

> The db_multiblock_read_count (MBRC) on these databases are explicitly set to the value of 8.

This might be a bit small for a warehouse, but it completely depends on the query workload. Do the majority of the queries use index access or table scans? Adjusting the MBRC will affect the costing of *all* plans, but it will only change the I/O sizes for those doing partition/index fast full/table scans. Be careful what you optimize for... if your storage and I/O channels cannot handle more I/O from more FTS, then there will likely be little benefit in adjusting it larger. In fact, if your I/O is already bottlenecked, it may get worse.
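If you are not sure how the workload breaks down, the cumulative instance statistics give a quick first look. A minimal sketch, assuming you have SELECT access to V$SYSSTAT (check the statistic names on your version):

-- Rough indication of scan-heavy vs. index-heavy access since instance startup.
-- 'table scans (long tables)' = full scans of large tables (where MBRC matters),
-- 'table fetch by rowid'      = rows fetched via index access.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('table scans (long tables)',
                'table scans (short tables)',
                'table fetch by rowid');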

> To try to get a better idea of what we might increase the MBRC to, I wanted to figure out the values for the formula:
> [stripe width] x [stripe size] / [db_block_size ]

This looks like a great equation but I don't think it is what you want. Generally it is better to have the largest database I/O match the stripe depth/size (how much of the stripe is on a single physical disk) so that a single large (say 1MB) I/O is serviced by exactly one physical disk.
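To put some illustrative numbers on that (these are assumptions, not figures from your environment): with an 8 KB db_block_size and a 1 MB stripe depth, an MBRC of 128 gives 128 x 8 KB = 1 MB per multiblock read, so each large read can be serviced by a single physical disk rather than being split across several.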

I would recommend reviewing the Oracle docs: http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/iodesign.htm#i28412

The other thing to consider is all the different layers that can change or chop up the I/O size. These include the kernel, I/O driver, storage array, RAID controller, etc. The I/O size you think you are sending may not be what the physical disk actually receives. This is often only visible with storage array tools.

> I realize that perhaps the best way to determine the optimal MBRC value with these file systems would be to use the dd command and write a large datafile with different block size options to the filesystems.

Actually, the best way to determine it is to run some FTS while altering MBRC at the session level and compare the elapsed times. Just make sure that you are doing 100% physical I/O (no blocks coming from the buffer cache or filesystem cache). Then you will see whether it is worth your time. My guess is probably not.
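Something along these lines is all it takes. This is only a sketch for SQL*Plus; the table name (BIG_FACT) and the MBRC values tested are assumptions you would replace with your own:

-- Compare elapsed times of the same full scan at different MBRC settings.
-- Make sure the blocks are not coming from the buffer cache or the
-- filesystem cache, or the comparison is meaningless.
SET TIMING ON

ALTER SESSION SET db_file_multiblock_read_count = 8;
SELECT /*+ FULL(t) */ COUNT(*) FROM big_fact t;

ALTER SESSION SET db_file_multiblock_read_count = 32;
SELECT /*+ FULL(t) */ COUNT(*) FROM big_fact t;

ALTER SESSION SET db_file_multiblock_read_count = 128;
SELECT /*+ FULL(t) */ COUNT(*) FROM big_fact t;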

I would recommend you spend your time looking at execution plans and determining whether you have optimal partitioning. The fastest I/O is the one that never takes place (e.g. partition elimination). These will likely give you orders-of-magnitude gains vs. a few percentage points from tweaking MBRC. Go after the big fish!
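As a hypothetical example of what partition elimination buys you (table and column names are made up, not from your schema): if a fact table is range partitioned by date, a query that filters on the partition key only touches the partitions it needs:

CREATE TABLE sales_fact (
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2008_08 VALUES LESS THAN (TO_DATE('2008-09-01','YYYY-MM-DD')),
  PARTITION p_2008_09 VALUES LESS THAN (TO_DATE('2008-10-01','YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- The predicate on the partition key lets the optimizer prune down to
-- p_2008_08 instead of scanning the whole table.
SELECT SUM(amount)
FROM   sales_fact
WHERE  sale_date >= TO_DATE('2008-08-01','YYYY-MM-DD')
AND    sale_date <  TO_DATE('2008-09-01','YYYY-MM-DD');

Scanning one monthly partition instead of a 2+ TB table is the kind of difference no MBRC setting will ever make up.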

-- 
Regards,
Greg Rahn
http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Thu Sep 04 2008 - 17:11:25 CDT
