
RE: I/O and db_file_multiblock_read_count

From: Ric Van Dyke <>
Date: Sat, 9 Dec 2006 10:26:31 -0600
Message-ID: <>

Vlad, I'm with you on this.

The optimizer assumes that the higher the MBRC, the cheaper a full table scan will be. That assumption is not always true. Running strace on the process might show some clues: Oracle makes a call out to the OS asking for that number of blocks, but how the OS services that request is not entirely clear from the 10046 trace alone. As Vlad says, maybe at the OS level it is making several read calls to get what was done in just one read with the lower MBRC.
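As a rough illustration of the fewer-but-larger-requests idea, here is a minimal sketch (plain Python against a scratch file, assuming an 8 KB block size). It only counts the read() calls the program itself issues; it says nothing about how the OS or SAN actually services them underneath, which is exactly the part the 10046 trace cannot show:

```python
import os
import tempfile
import time

BLOCK = 8192     # assume an 8 KB Oracle block size
NBLOCKS = 4096   # 32 MB scratch file standing in for the table

# Build a scratch file to "full scan".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (BLOCK * NBLOCKS))
    path = f.name

def scan(mbrc):
    """Scan the whole file issuing one read() per multiblock request."""
    calls = 0
    t0 = time.perf_counter()
    with open(path, "rb", buffering=0) as fh:
        while fh.read(BLOCK * mbrc):  # empty bytes -> EOF, loop ends
            calls += 1
    return calls, time.perf_counter() - t0

for mbrc in (16, 128):
    calls, elapsed = scan(mbrc)
    print(f"MBRC={mbrc:3d}: {calls} read() calls, {elapsed:.4f}s")

os.unlink(path)
```

With MBRC=128 the program issues one eighth as many calls as with MBRC=16, yet, as the thread shows, fewer and larger requests are not guaranteed to be faster end to end.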

Ric Van Dyke

Hotsos Enterprises

Hotsos Symposium March 4-8, 2007. Be there.

From: [] On Behalf Of Vlad Sadilovskiy
Sent: Saturday, December 09, 2006 10:35 AM
To:
Cc: Hemant K Chitale; oracle-l
Subject: Re: I/O and db_file_multiblock_read_count

I don't understand why people insist on gathering system statistics, as Hemant noted, instead of answering the question of why the higher MBRC parameter resulted in a longer run. The MBRC from system statistics is used solely for CBO costing purposes. The actual reading is done according to db_file_multiblock_read_count.

Under different initial settings of MBRC, and going just by the presented portion of the trace, gather_system_stats would produce different average multiblock read times: 16 ms for 128 blocks (16/128 = 0.125 ms/block) versus 0.977 ms for 15.75 blocks (0.977/15.75 = 0.062 ms/block). For the CBO in these circumstances, the cost of a scattered read versus a single-block read of the same unit of data is 0.125/0.062 = 2.01 times more. Hence, the CBO will produce fewer plans that require scattered reads when MBRC is set to 128.
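The per-block arithmetic above can be checked in a few lines. This is only a restatement of the figures already quoted from the trace, not new measurements:

```python
# Per-block read-time arithmetic from the quoted trace figures.
big_ms, big_blocks = 16.0, 128         # 128-block scattered read took 16 ms
small_ms, small_blocks = 0.977, 15.75  # ~16-block scattered read took 0.977 ms

big_per_block = big_ms / big_blocks        # 0.125 ms/block
small_per_block = small_ms / small_blocks  # ~0.062 ms/block
ratio = big_per_block / small_per_block    # ~2.01

print(f"{big_per_block:.3f} ms/block vs {small_per_block:.3f} ms/block: "
      f"multiblock reads look about {ratio:.2f}x as expensive per block")
```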

The interesting point, though, is the initial question: why does a bigger I/O request make the SAN perform worse? You should trace down to the number of system calls and actual physical reads. It could be that it needs to make twice as many I/Os when MBRC is 128. Try gradually increasing MBRC from 16 to 128 to find the knee of the curve.
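Tallying the scattered-read waits for each MBRC setting can be scripted against the raw 10046 trace file. A minimal sketch, assuming the older p1/p2/p3 WAIT-line format with ela= in microseconds (newer releases use named fields such as blocks=, so the regex would need adjusting; the sample lines below are hypothetical):

```python
import re

# Match: WAIT #n: nam='db file scattered read' ela= <us> ... p3=<blocks>
WAIT_RE = re.compile(r"nam='db file scattered read' ela=\s*(\d+) .*?p3=(\d+)")

def tally(trace_lines):
    """Count scattered-read waits, total blocks (p3), total elapsed us."""
    calls = blocks = ela_us = 0
    for line in trace_lines:
        m = WAIT_RE.search(line)
        if m:
            calls += 1
            ela_us += int(m.group(1))
            blocks += int(m.group(2))
    return calls, blocks, ela_us

# Hypothetical sample lines, for illustration only.
sample = [
    "WAIT #2: nam='db file scattered read' ela= 977 p1=4 p2=100 p3=16",
    "WAIT #2: nam='db file scattered read' ela= 16000 p1=4 p2=228 p3=128",
]
calls, blocks, ela_us = tally(sample)
print(f"{calls} scattered reads, {blocks} blocks, {ela_us / 1e6:.4f}s total")
```

Running this over the 16-block and 128-block trace files would show whether the larger requests are being broken into more physical I/Os than expected.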

On 12/9/06, Stefan Knecht <> wrote:

The thing is, when you collect system statistics (i.e., make use of CPU costing), Oracle gathers stats about your system's average multiblock read count. It then uses that information instead of whatever you might have set in your init.ora.

Therefore, his test may be wrong - IF he's got sysstats.


On 12/9/06, Hemant K Chitale <> wrote:

I don't understand Mladen's and Stefan's recommendations about running dbms_stats.gather_system_stats.

Kevin says that he deliberately ran a test to check the performance of multiblock reads.
He is not asking "how can I avoid multiblock reads?". He is asking "why is the total time for 128-block reads not significantly less than the total time for 16-block reads?".


>On 12/8/06, Kevin Lidh <> wrote:
>I was reading an article about the appropriate setting for
>db_file_multiblock_read_count. I'm on a HP-UX 11.11 64-bit system
>with Oracle The original value was 16 and I bounced the
>database and ran a million record full-scan test (10046 trace) and
>then set the value to 128 (max value) and re-ran the test. Although
>I did have less calls to 'db file scattered read' (140 compared to
>1008), the time to get the same number of blocks was longer. A
>random example from the trace files looks like this:
>And I ran the test again with 16 after my 128 test and the results
>were similar to the first test. The cumulative times for the 'db
>file scattered read' was 1.3181s for the 16 and 2.5149s when it was
>128. We use a Hitachi SAN and I know it has caching but I would
>still think that the fewer, bigger requests would be faster. Is
>there a measurement or setting on the operating system or even SAN
>that I can look at to help determine the optimal setting?

Hemant K Chitale  

Received on Sat Dec 09 2006 - 10:26:31 CST
