
Re: DB Buffer Cache Size

From: Don Burleson <don_at_burleson.cc>
Date: 21 Aug 2004 06:12:20 -0700
Message-ID: <998d28f7.0408210512.39a499ce@posting.google.com>


> > Absolutely. A cache has to be managed. Make it too big, and management
> > becomes all you do, and never mind serving up data.

As some would say, LET'S PROVE IT!

For example, why do ALL the 10g "world record" benchmarks use over 100 gig data caches? I worked with one of these TPC-C benchmarks and ran repeatable timings. Up to the point where the working set was cached, the benefit of a larger data cache outweighed the LIO overhead.
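You can see the same shape with a toy LRU simulation (a Python sketch with invented numbers, not Oracle internals): the hit ratio climbs as the cache grows toward the working-set size, then flattens, so memory past that point buys almost nothing.

```python
from collections import OrderedDict
import random

def hit_rate(cache_blocks, accesses):
    """Simulate a simple LRU buffer cache and return the hit ratio."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least-recently-used
    return hits / len(accesses)

# Invented workload: a working set of 1,000 hot blocks hit repeatedly.
random.seed(42)
workload = [random.randrange(1000) for _ in range(20000)]

small  = hit_rate(100, workload)   # cache far below the working set
fitted = hit_rate(1000, workload)  # cache just covers the working set
huge   = hit_rate(5000, workload)  # cache far above the working set

# small is poor, fitted is high, and huge is barely better than fitted:
# once the working set is cached, extra buffers only add management cost.
```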

Using multiple blocksizes also helped greatly when the appropriate blocksize was matched to the types of objects in each tablespace.

For example, small OLTP row accesses like a 2k blocksize because you don't waste RAM hauling in block space you don't need. A 32k tablespace for index range scans also showed a large, measurable performance improvement.
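Back-of-the-envelope arithmetic (a Python sketch with assumed numbers) shows why: with random OLTP access, each hot row tends to land in a different block, so the blocksize sets how much cache one hot row drags in with it.

```python
# Toy arithmetic with assumed numbers: each randomly accessed hot row
# pins its entire block in the buffer cache.
def cache_bytes(block_size, hot_rows):
    """Cache consumed when each hot row lives in its own block."""
    return hot_rows * block_size

hot_rows = 10_000
cost_2k  = cache_bytes(2 * 1024, hot_rows)   # ~20 MB of cache
cost_32k = cache_bytes(32 * 1024, hot_rows)  # ~320 MB for the same rows
```

Sixteen times the cache for the same hot rows — that's the RAM you waste hauling in block space you don't need.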

But don't take my word for it, see for yourself:

http://www.tpc.org/results/FDR/TPCC/HP%20Integrity%20rx5670%20Cluster%2064P_FDR.pdf

db_cache_size = 4000M
db_recycle_cache_size = 500M
db_8k_cache_size = 200M
db_16k_cache_size = 4056M
db_2k_cache_size = 35430M
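For reference, here is a hypothetical sketch of how such a multi-blocksize layout gets set up (tablespace names, file paths, and sizes are invented; the cache for a non-default blocksize must exist before a tablespace can use it):

```sql
-- Assumption: db_block_size is the default (e.g. 8k); configure caches
-- for the non-default block sizes first.
ALTER SYSTEM SET db_2k_cache_size = 64M;
ALTER SYSTEM SET db_32k_cache_size = 256M;

-- Small-row OLTP objects in a 2k tablespace, range-scanned indexes in 32k.
CREATE TABLESPACE oltp_2k DATAFILE '/u01/oradata/oltp_2k.dbf'
  SIZE 1G BLOCKSIZE 2K;
CREATE TABLESPACE idx_32k DATAFILE '/u01/oradata/idx_32k.dbf'
  SIZE 1G BLOCKSIZE 32K;
```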

> > Rather more technically, the bigger your cache, the larger your LRU and
> > Dirty List becomes.

True, but PIO is even more expensive! Again, see the Sun 10g benchmark:

http://sun.systemnews.com/articles/69/2/Oracle/11510

> Again, you are right on target. I suspect that one or more of my
> colleagues is going to be called in to help this company tune their
> database. When I proposed to them the possibility that TOO MUCH memory
> was being allocated to the buffer cache, perhaps sacrificing some
> efficiency in other arenas, I was treated somewhat like a pariah.

The size of the buffer cache depends on the size of the "working set" of frequently-referenced data! A very large OLTP system might need a 60 gig KEEP pool:

http://www.dba-oracle.com/art_dbazine_9i_multiblock.htm
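A hypothetical sketch of sizing and using a KEEP pool (table name and sizes are invented for illustration):

```sql
-- Reserve a KEEP pool large enough for the frequently-referenced segments;
-- a change this large takes effect at the next restart.
ALTER SYSTEM SET db_keep_cache_size = 60G SCOPE=SPFILE;

-- Assign a hot table (hypothetical name) to the KEEP pool.
ALTER TABLE orders STORAGE (BUFFER_POOL KEEP);
```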

Also, look into the mechanism inside 10g AMM (Automatic Memory Management). It computes the marginal benefit of additional data block buffers based on the cost of reduced PIO, and it does indeed consider the cost of serialization overhead.
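The marginal-benefit idea can be sketched like this (pure illustration in Python — the hit-rate model and threshold are invented, not the actual 10g algorithm):

```python
# Invented model: estimate physical reads at candidate cache sizes and stop
# growing the cache once the marginal PIO saving per extra GB gets too small.
def estimated_pio(cache_gb, total_reads=1_000_000, working_set_gb=8):
    """Misses shrink roughly linearly until the working set fits."""
    if cache_gb >= working_set_gb:
        return int(total_reads * 0.02)  # only cold/first-touch reads remain
    miss_ratio = 1.0 - (0.98 * cache_gb / working_set_gb)
    return int(total_reads * miss_ratio)

def pick_cache_size(max_gb=16, min_saving_per_gb=5_000):
    size = 1
    while size < max_gb:
        saving = estimated_pio(size) - estimated_pio(size + 1)
        if saving < min_saving_per_gb:
            break  # marginal benefit no longer justifies more memory
        size += 1
    return size

best = pick_cache_size()
# The search stops once the (invented) 8 GB working set is cached.
```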
