RE: I/O and db_file_multiblock_read_count

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Mon, 11 Dec 2006 17:23:30 -0500
Message-ID: <FBEIIHEAOIFCBBNIIFOGOEHACGAA.mwf@rsiz.com>


So it appears that something on the OS side of things IS actually penalizing you for fewer, bigger drinks.

You're getting just a shade over 1 millisecond of service time per record at 131072 (clearly from some layer of cache, unless you have solid state disks), while you're limping along at almost 25 milliseconds to get a record only 8 times as big.
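
Back-of-the-envelope from the dd numbers you posted below (a quick bc check: real time in seconds, times 1000, over records):

Unix> echo "scale=2; 1.4 * 1000 / 1290" | bc
1.08
Unix> echo "scale=2; 4.0 * 1000 / 161" | bc
24.84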

Now if the data were not already in some layer of cache I'd be thinking you were paying double seeks and hitting two different stripes to get those 1 MB chunks. But since it *seems* from the first test that the data *IS* in cache, that makes no sense unless there is a *feature* somewhere in the stack of your disk farm access that presumes it will need to go to disk for drinks larger than *x* bytes, where *x* is some number bigger than 131072 and smaller than 1048576.

Humor me and try half a million. If that is fast, try 1,000,000. (Yeah, I know, not a power of two, but we don't know whether the person who configured the disk farm likes decimal numbers.)

If that is slow, try 262144.
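
Something like the following, reusing the same datafile from your dd runs below (the path is taken from your output, so adjust if it differs):

Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null bs=500000
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null bs=1000000
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null bs=262144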

In any case it seems like Oracle is off the hook on this one, and you don't even need to go to the fancier test of breaking the reads out into chunks more precisely the way Oracle would in the scan.

Whether your SAN can be tamed to give you fast response for the bigger drinks you would like is very tightly bound to the exact hardware and software you have and how it is configured. If you solve that, though, it is very likely that the response from within Oracle will be what you expect too.

Regards,

mwf

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kevin Lidh
Sent: Monday, December 11, 2006 9:39 AM
To: Kevin Closson
Cc: oracle-l
Subject: RE: I/O and db_file_multiblock_read_count

I apologize for the tardiness of my response. This is the result of timing the dd. It seems to indicate the same thing as the scattered read test. Or am I reading that wrong?

(8k * 16)
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null bs=131072
1290+1 records in
1290+1 records out

real        1.4
user        0.0
sys         0.5

(8k * 128)
Unix> time dd if=/ora/data/bordcdw/users/users01.dbf of=/dev/null bs=1048576
161+1 records in
161+1 records out

real        4.0
user        0.0
sys         0.0


On Fri, 2006-12-08 at 15:48 -0800, Kevin Closson wrote:
> do the "same" thing using dd. Whatever your db_block_size is, plug it
> in as follows:
>
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null
> bs=<block_size_in_bytes*16>
>
> then re-run:
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null
> bs=<block_size_in_bytes*128>
>
> please let me know what you find
>
>
>
>
> ______________________________________________________________
> From: oracle-l-bounce_at_freelists.org
> [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kevin Lidh
> Sent: Friday, December 08, 2006 12:46 PM
> To: oracle-l
> Subject: I/O and db_file_multiblock_read_count
>
>
>
> I was reading an article about the appropriate setting for
> db_file_multiblock_read_count. I'm on an HP-UX 11.11 64-bit
> system with Oracle 9.2.0.7.0. The original value was 16 and I
> bounced the database and ran a million record full-scan test
> (10046 trace) and then set the value to 128 (max value) and
> re-ran the
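
A rough sketch of that kind of test on 9.2 (the connect string and table name here are placeholders; db_file_multiblock_read_count can also be set at the session level, so a quick comparison does not strictly need a bounce):

Unix> sqlplus -s scott/tiger <<'EOF'
rem use the larger value for this session only
alter session set db_file_multiblock_read_count = 128;
rem 10046 level 8 traces the scan with wait events
alter session set events '10046 trace name context forever, level 8';
rem placeholder table; substitute the million-row table from the test
select /*+ full(t) */ count(*) from million_row_table t;
alter session set events '10046 trace name context off';
exit
EOF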

--
http://www.freelists.org/webpage/oracle-l