Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

RE: db_file_multiblock_read_count and performance - Bayesian Filter detected spam

From: Post, Ethan <Ethan.Post_at_ps.net>
Date: Wed, 8 Dec 2004 07:36:26 -0600
Message-ID: <83FCA77436D6A14883E132C63F4101D001DCCFC6@pscdalpexch50.perotsystems.net>


Putting multiple sessions and optimizer choices aside, I would like to know exactly why (assuming 128 is the max) 128 is faster than 1 but not necessarily faster than some value less than 128. Perhaps the reasons for this are not exactly clear given the complexities of the I/O subsystem. I agree the best method is likely to run some tests to see what is the fastest way to access a row given a full scan, then balance that against the number of users and the influence the setting will have on the optimizer. I just don't want to be one of those people who think the sky is falling every time there is a full scan. I can't tell you how many people come to me demanding I somehow magically remove all full scans from a query plan because "all full scans are bad".
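Ethan's question has a back-of-the-envelope answer: each multiblock read pays a fixed per-call cost (seek/latency plus call overhead) and a per-block transfer cost, so the fixed cost amortizes quickly and the gains taper off well before the maximum. A minimal sketch of that model (the latency and transfer numbers are invented purely for illustration):

```python
import math

def scan_time(blocks, mbrc, latency_ms=5.0, transfer_ms=0.1):
    """Estimated full-scan time in ms: ceil(blocks/mbrc) read calls,
    each paying a fixed latency plus a per-block transfer cost.
    (Made-up constants, for illustration only.)"""
    calls = math.ceil(blocks / mbrc)
    return calls * (latency_ms + mbrc * transfer_ms)

for m in (1, 8, 32, 128):
    print(f"mbrc={m:>3}: {scan_time(100_000, m):>9.0f} ms")
```

With these made-up numbers, going from 1 to 8 buys about a 7x speedup, while going from 32 to 128 buys less than 2x; a real I/O subsystem adds further caps (stripe size, maximum physical I/O size), so the curve flattens even earlier.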

-----Original Message-----
From: Christian Antognini [mailto:Christian.Antognini_at_trivadis.com]
Sent: Wednesday, December 08, 2004 1:35 AM
To: Post, Ethan
Cc: Oracle-L_at_freelists.org; ryan_gaffuri_at_comcast.net
Subject: RE: db_file_multiblock_read_count and performance - Bayesian Filter detected spam

Hi

(Sorry for the delay... but I'm offline at a customer site...)

>This kind of brings up an interesting thought. There is the script on
>Ixora to test the largest MBR size and then you are supposed to set the
>value to that,

Setting it to the largest value is not good on all I/O sub-systems and is sometimes very poor for the optimizer. Usually a value that gives good performance (let's say 90-95% of the maximum) leads to much better execution plans. Notice that if system statistics are used, they automatically "compensate" for large values with a much better one (e.g. on a real system it is not very often that you can read 50-60 contiguous blocks with a FTS...).
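Chris's point about system statistics can be illustrated with the (heavily simplified) shape of the full-scan I/O cost: without workload statistics the optimizer roughly prices a FTS as blocks divided by the parameter, while with workload statistics it uses the measured multiblock read count, which on a real system is often far below 128. All numbers below are hypothetical:

```python
def fts_io_cost(blocks, mbrc):
    """Simplified full-scan I/O cost: the number of multiblock
    reads needed to scan the table."""
    return blocks / mbrc

table_blocks = 100_000
parameter = 128      # db_file_multiblock_read_count as set (hypothetical)
measured_mbrc = 56   # what the system actually achieves (hypothetical)

naive = fts_io_cost(table_blocks, parameter)          # without system stats
realistic = fts_io_cost(table_blocks, measured_mbrc)  # with workload stats

print(f"cost assuming 128-block reads: {naive:.0f}")
print(f"cost using measured MBRC:      {realistic:.0f}")
```

The measured-MBRC cost comes out higher, i.e. system statistics stop the optimizer from pricing full scans as if every read really grabbed 128 contiguous blocks; that is the "compensation" described above.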

>maybe it would be a better practice to generate a huge
>table, run tests at different sizes, then set it.

For my tests I took a table of at least 1GB (on small systems) or 10GB (on big systems), i.e. I just reused a table with real data.
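Outside the database, the same testing idea can be sketched as a file-scan micro-benchmark: write a scratch file once, then time one sequential pass per read size. This is a pure OS-level illustration (the 8 KB block assumption, file sizes, and function names are mine, and OS caching will distort small runs), not an Oracle test:

```python
import os
import tempfile
import time

BLOCK = 8192  # assume an 8 KB block size

def time_scan(path, mbrc):
    """Time one sequential pass over path, reading mbrc blocks per call."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK * mbrc):
            pass
    return time.perf_counter() - start

def benchmark(size_mb=64, sizes=(1, 4, 16, 64, 128)):
    """Return {read size in blocks: seconds} for one scratch file."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(os.urandom(1024 * 1024) * size_mb)
        return {m: time_scan(path, m) for m in sizes}
    finally:
        os.remove(path)

if __name__ == "__main__":
    for m, secs in benchmark().items():
        print(f"{m:>3} blocks/read: {secs:.3f} s")
```

On a real system you would instead alter the session parameter and time the same full scan against a table of realistic size, as described above.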

>In theory the largest
>size possible would be fastest, but it would be interesting to find out
>whether this is always true.

This is not true! You should really do some tests!!!!

Chris

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Dec 08 2004 - 07:37:53 CST
