Re: DB Buffer Cache Size

From: omlet v4 <teraknowledgesystems_at_yahoo.com>
Date: 21 Aug 2004 12:47:16 -0700
Message-ID: <fc85c159.0408211147.6ffa9c3@posting.google.com>


don_at_burleson.cc (Don Burleson) wrote in message news:<998d28f7.0408210512.39a499ce_at_posting.google.com>...
> > > Absolutely. A cache has to be managed. Make it too big, and management
> > > becomes all you do, and never mind serving up data.
>
> As some would say, LET'S PROVE IT!
>
> For example, why do ALL the 10g "world record" benchmarks use over 100
> gig data caches? I worked with one of these TPCC benchmarks and ran
> repeatable timings. Up to the point where the working set was cached,
> the benefit of a larger data cache outweighed the LIO overhead.
>
> Using multiple blocksizes also helped greatly when the
> appropriate blocksize was matched to the types of objects in
> the tablespaces.
>
> For example, small OLTP row accesses favor a 2k blocksize because
> you don't waste RAM hauling in block space you don't need. A 32k
> tablespace for index range scans also showed a large, measurable
> performance improvement.
>
> But don't take my word for it, see for yourself:
>
> http://www.tpc.org/results/FDR/TPCC/HP%20Integrity%20rx5670%20Cluster%2064P_FDR.pdf
>
> db_cache_size = 4000M
> db_recycle_cache_size = 500M
> db_8k_cache_size = 200M
> db_16k_cache_size = 4056M
> db_2k_cache_size = 35430M
>
> > > Rather more technically, the bigger your cache, the larger your LRU and
> > > Dirty List becomes.
>
> True, but PIO is even more expensive! Again, see the Sun 10g
> benchmark:
>
> http://sun.systemnews.com/articles/69/2/Oracle/11510
>
> > Again, you are right on target. I suspect that one or more of my
> > colleagues is going to be called in to help this company tune their
> > database. When I proposed to them the possibility that TOO MUCH memory
> > was being allocated to the buffer cache, perhaps sacrificing some
> > efficiency in other arenas, I was treated somewhat like a pariah.
>
> The size of the buffer cache depends on the size of the "working set"
> of frequently-referenced data! A very large OLTP system might need a
> 60 gig KEEP pool:
>
> http://www.dba-oracle.com/art_dbazine_9i_multiblock.htm
>
> Also, look into the mechanism inside 10g AMM. It computes the
> marginal benefit of additional data block buffers based on the
> savings in PIO, and it does indeed consider the cost of
> serialization overhead.
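
For anyone wanting to reproduce the multiple-blocksize and KEEP-pool setup quoted above, here is a minimal SQL sketch. The sizes, datafile path, and table name are illustrative, not taken from the benchmark FDR:

  -- A nondefault blocksize needs its own cache allocated first
  ALTER SYSTEM SET db_32k_cache_size = 512M;
  ALTER SYSTEM SET db_keep_cache_size = 1G;

  -- A 32k tablespace for heavily range-scanned indexes
  CREATE TABLESPACE indx_32k
    DATAFILE '/u02/oradata/prod/indx_32k_01.dbf' SIZE 4G
    BLOCKSIZE 32K;

  -- Pin a small, frequently referenced table in the KEEP pool
  ALTER TABLE customer STORAGE (BUFFER_POOL KEEP);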

Check the memory on the "Integrity rx5670 Cluster 64P": it has 800GB and does 0.5M IO/sec sustained! (Anyone old enough to remember the TANDEM challenge?) Without caching you would need 10K spindles -- they are actually using 2500 spindles. That speaks volumes about the efficiency of the Oracle cache. Of course, those 64 CPUs have about 400MB of L2 cache as well.
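
The spindle math, spelled out (assuming ~50 random IOPS per 2004-era spindle -- my figure, not from the FDR):

  SELECT 500000 / 50             AS spindles_if_uncached,     -- 10000
         2500 * 50               AS physical_io_per_sec,      -- 125000
         1 - (2500 * 50)/500000  AS implied_cache_hit_ratio   -- 0.75
  FROM dual;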

I would use OMLET to justify increasing cache buffers. Given a TPC-C kind of load (1M users): if the logical I/O graph keeps increasing monotonically but falls off before your CPU utilization reaches 100%, then I would try a bigger cache. A bigger cache will give you a better logical I/O rate until you have the working set in cache.
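
One way to put numbers behind that heuristic, assuming 9i or later with db_cache_advice = ON:

  -- Predicted physical reads at alternative DEFAULT-pool sizes
  SELECT size_for_estimate, size_factor, estd_physical_read_factor
  FROM   v$db_cache_advice
  WHERE  name = 'DEFAULT'
  AND    advice_status = 'ON'
  ORDER  BY size_for_estimate;

  -- Sample twice, a few minutes apart; the deltas give LIO/s and PIO/s
  SELECT name, value
  FROM   v$sysstat
  WHERE  name IN ('session logical reads', 'physical reads');

If estd_physical_read_factor is still dropping at the largest estimated size, the working set is not yet cached and a bigger cache should keep paying off.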

You can use Quest Spotlight for the same purpose! Good competitors create a market!

For something more scientific: there is neural network software that Oracle once tried for predicting hit ratios given an access pattern.

Amjad Daoud

Received on Sat Aug 21 2004 - 14:47:16 CDT
