Re: Maximum Db_cache_size?

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Tue, 20 Jun 2006 08:04:28 +0100
Message-ID: <00c001c69437$c5dd13b0$0300a8c0@Primary>

A couple of algorithmic issues with buffer cache size -

A "small" table is loaded at the mid-point when you do a tablescan. "small" is defined as 2% of the buffer cache. So you can be unlucky and find that increasing the cache dramatically just happens to cause a lot more tablescans to enter at the mid-point, thus knocking a lot of useful blocks out of the cache prematurely.

When there are a lot of free buffers in the buffer cache, they are preferentially used for creating CR (consistent read) copies of data blocks. In theory there is a limit of six CR copies of a block (_db_block_max_cr_dba) - Oracle 9 seems to be better at sticking to this than earlier versions (possibly as a side-effect of other changes), but you can still get many more than six. When that happens, you can see extreme 'cache buffers chains' latch contention on the blocks that have large numbers of copies. Creating an unnecessarily large buffer cache may leave you in a state where you frequently end up with excessive copies of popular blocks.
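If you suspect this is happening, a query along the following lines against v$bh can highlight the blocks with excessive CR clones. This is just a sketch, assuming the nominal limit of six is the interesting cut-off:

    -- Sketch: count consistent-read (CR) clones per block in the
    -- buffer cache and report anything beyond the nominal limit of 6.
    select file#, block#, count(*) cr_copies
    from   v$bh
    where  status = 'cr'
    group  by file#, block#
    having count(*) > 6
    order  by count(*) desc;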

Until 10g, actions like segment shrink, drop table, and truncate table require a search of the buffer cache to mark buffers as free. The time to clear ought to be related to the number of blocks in the object (in 8i and 9i), but if you have a large cache holding lots of CR clones, the impact of regular drops and truncates could be exaggerated.
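To get a feel for the scale, something like this sketch counts how many buffers (in all states, including CR clones) a single object currently occupies - roughly the volume a drop or truncate of that object would have to walk. The table name T is just a placeholder:

    -- Sketch: count buffered blocks for one hypothetical table T
    -- owned by the current user. v$bh.objd is the data object id.
    select count(*) buffers
    from   v$bh
    where  objd = (select data_object_id
                   from   dba_objects
                   where  owner = user
                   and    object_name = 'T');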

On the flip side, there is a bug in 8i and 9i relating to the touch count: when reading blocks that have been loaded by tablescan (or index fast full scan), Oracle 'forgets' to increment the touch count. This means that even "small" tables fall out of the buffer cache fairly promptly, no matter how popular they are. Consequently, creating a KEEP cache large enough for all the frequently scanned tables and assigning those tables to it can reduce the I/O component - especially if you are bypassing the filesystem. (The bug is fixed in 10g.)
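The syntax for that workaround is the standard one - size a KEEP pool and assign the frequently scanned tables to it. The 48M figure and table name below are just placeholders:

    -- Sketch: size the KEEP pool (9i dynamic syntax; 8i uses the
    -- buffer_pool_keep parameter instead) and assign a table to it.
    alter system set db_keep_cache_size = 48M;
    alter table t storage (buffer_pool keep);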

Regards

Jonathan Lewis
http://www.oracle.com/technology/community/oracle_ace/ace1.html#lewis

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html

Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

Subject: Maximum Db_cache_size?
Date: Mon, 19 Jun 2006 10:43:37 -0000
From: "Laimutis Nedzinskas" <Laimutis.Nedzinskas_at_landsbanki.is>

Is it good or bad to use all available RAM (i.e. RAM not needed for other purposes, e.g. the shared pool or the PGA) for the buffer cache? I am talking about the 32 GB RAM range.

Oracle is not an in-memory database, but memory access is still faster than disk access. However, I understand that the way (i.e. the algorithms by which) Oracle uses RAM may have a practical turning point, beyond which adding more RAM only slows things down.

Therefore the question is:
