RE: db_file_multiblock_read_count 10g default values

From: Allen, Brandon <Brandon.Allen_at_OneNeck.com>
Date: Thu, 5 Nov 2009 15:18:23 -0700
Message-ID: <64BAF54438380142A0BF94A23224A31E112E739FFA_at_ONEWS06.oneneck.corp>



I just did some more testing and you were right about it being based on the processes parameter too, as you can see in the table below, with __db_cache_size held constant at 200M (and sga_target=400M). So my earlier formula only holds true when processes <= 300; once you get above that, db_file_multiblock_read_count is gradually scaled back. I don't know the exact formula, but I think we can definitely conclude that db_file_multiblock_read_count is directly related to db_cache_size and inversely related to processes.

Regards,
Brandon

processes  __db_cache_size  dbfmbrc  dbfmbrc*db_block_size(8k)  dbfmbrc*db_block_size/db_cache_size
---------  ---------------  -------  -------------------------  -----------------------------------
       50        201326592       70                     573440                               0.0028
      100        201326592       70                     573440                               0.0028
      200        201326592       70                     573440                               0.0028
      300        201326592       70                     573440                               0.0028
      400        201326592       53                     434176                               0.0022
      500        201326592       42                     344064                               0.0017
      600        201326592       35                     286720                               0.0014
      700        201326592       30                     245760                               0.0012
      800        201326592       26                     212992                               0.0011
     1000        201326592       21                     172032                               0.0009
     2000        201326592       10                      81920                               0.0004
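Out of curiosity, here's a quick sketch (Python, outside the database) that tests one candidate formula against those numbers: the min(1MB/db_block_size, db_cache_size/(sessions*db_block_size)) rule that gets floated for 10g, with sessions derived from processes via the default (1.1*processes)+5. This is purely a hypothesis check, not verified internals. It tracks the inverse scaling above 300 processes only roughly (predictions run a few percent high), and it predicts a cap of 128 rather than the observed 70, so it's clearly not the exact formula either.

```python
# Hypothesis check: dbfmbrc = min(1MB / block_size,
#                                 db_cache_size / (sessions * block_size))
# sessions derived from processes as (1.1 * processes) + 5 (10g default).
# All constants below come from the test results in the table; the formula
# itself is an assumption being tested, not a documented internal.

DB_BLOCK_SIZE = 8192
DB_CACHE_SIZE = 201326592  # __db_cache_size held constant at ~192 MB

def sessions_from_processes(processes):
    # 10g default derivation of sessions from processes
    return int(1.1 * processes) + 5

def predicted_dbfmbrc(processes):
    cap = 1048576 // DB_BLOCK_SIZE  # 1 MB max I/O -> 128 for 8k blocks
    scaled = DB_CACHE_SIZE // (sessions_from_processes(processes) * DB_BLOCK_SIZE)
    return min(cap, scaled)

# Observed values from the table above
observed = {50: 70, 300: 70, 400: 53, 500: 42, 1000: 21, 2000: 10}

for procs, obs in observed.items():
    print(f"processes={procs:5d}  observed={obs:3d}  predicted={predicted_dbfmbrc(procs):3d}")
```

The mismatch at the cap (128 predicted vs. 70 observed) suggests the real ceiling involves something other than a flat 1 MB limit, which fits the conclusion above that the exact formula is still unknown.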

-----Original Message-----

From: Greg Rahn [mailto:greg_at_structureddata.org]

The choice for db_file_multiblock_read_count when not set is dependent on some other parameters (IIRC buffer cache size, sessions/processes, likely some others)

--
http://www.freelists.org/webpage/oracle-l
Received on Thu Nov 05 2009 - 16:18:23 CST
