Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: DB Buffer Cache Size

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Sun, 22 Aug 2004 23:55:11 +1000
Message-ID: <412753cb$0$25143$afc38c87@news.optusnet.com.au>


Don Burleson wrote:

>> > Absolutely. A cache has to be managed. Make it too big, and management
>> > becomes all you do, and never mind serving up data.

>
> As some would say, LET'S PROVE IT!

Please do so. But don't quote TPC benchmarks to do it. They consist of 'Hang the expense - we want the performance numbers!' implementations. The real world consists rather more of having to strike a decent compromise between the best possible performance numbers and the biggest possible bank balance.

Your post seems to be advocating the same stuff as the old 'make the cache as big as possible until all required data is cached' school of thought used to do. Typified by frequent posts here at one time along the lines of 'how do I pin entire tables in memory'.

You dismiss the overhead involved in managing large caches as being cheaper than the physical I/O required as a result of a cache miss. You are entitled to do so, of course. But I think it "brave" to be so cavalier.

Maybe when you have 100GB of RAM to throw at your caches (and a disk subsystem to match) you can afford it. Most people can't.
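[Editor's note: the management overhead being argued about here is observable. One way to gauge it on a live instance is to watch the latches that protect the buffer cache's LRU lists and hash chains; a sketch against the standard V$LATCH view (the latch names are the documented ones, the interpretation is mine, not from this thread):

```sql
-- Sketch: gauge buffer-cache management overhead via latch statistics.
-- 'cache buffers lru chain' protects the LRU/dirty lists; 'cache buffers
-- chains' protects the hash buckets. Rising misses and sleeps as the
-- cache grows is exactly the management cost under discussion.
SELECT name,
       gets,
       misses,
       sleeps,
       ROUND(misses / NULLIF(gets, 0) * 100, 2) AS miss_pct
FROM   v$latch
WHERE  name IN ('cache buffers lru chain', 'cache buffers chains');
```
]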

HJR
>
> For example, why do ALL the 10g "world record" benchmarks use over 100
> gig data caches? I worked with one of these TPCC benchmarks and ran
> repeatable timings. Up to the point where the working set was cached,
> the benefit of a larger data cache outweighed the LIO overhead.
>
> Also, using multiple blocksizes helped greatly when the
> appropriate blocksize was used depending on the types of objects in
> the tablespaces.
>
> For example, small OLTP row accesses like a 2k blocksize because you
> don't waste RAM hauling in block space you don't need. A 32k
> tablespace for index range scans also showed a large, measurable
> performance improvement.
>
> But don't take my word for it, see for yourself:
>
> http://www.tpc.org/results/FDR/TPCC/HP%20Integrity%20rx5670%20Cluster2064P_FDR.pdf
>
> db_cache_size = 4000M
> db_recycle_cache_size = 500M
> db_8k_cache_size = 200M
> db_16k_cache_size = 4056M
> db_2k_cache_size = 35430M
>
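[Editor's note: for readers unfamiliar with the multiple-blocksize parameters quoted above, a non-default blocksize tablespace can only be created once the matching cache parameter is set. A minimal sketch, with made-up datafile path and sizes:

```sql
-- Each non-default block size needs its own buffer cache first.
ALTER SYSTEM SET db_2k_cache_size = 512M SCOPE=BOTH;

-- Only then can a tablespace with that block size be created
-- (datafile path and size here are illustrative).
CREATE TABLESPACE oltp_2k
  DATAFILE '/u01/oradata/db/oltp_2k_01.dbf' SIZE 2G
  BLOCKSIZE 2K;
```
]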

>> > Rather more technically, the bigger your cache, the larger your LRU and
>> > Dirty List becomes.

>
> True, but PIO is even more expensive! Again, see the Sun 10g
> benchmark:
>
> http://sun.systemnews.com/articles/69/2/Oracle/11510
>

>> Again, you are right on target. I suspect that one or more of my
>> colleagues is going to be called in to help this company tune their
>> database.  When I proposed to them the possibility that TOO MUCH memory
>> was being allocated to the buffer cache, perhaps sacrificing some
>> efficiency in other arenas, I was treated somewhat like a pariah.

>
> The size of the buffer cache depends on the size of the "working set"
> of frequently-referenced data! A very large OLTP system might need a
> 60 gig KEEP pool:
>
> http://www.dba-oracle.com/art_dbazine_9i_multiblock.htm
>
> Also, look into the mechanism inside 10g AMM. It computes the
> marginal benefits of additional data block buffers based on the costs
> of less PIO and does indeed consider the costs of serialization
> overhead.
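
[Editor's note: readers who want to check the marginal-benefit claim on their own system can query the standard V$DB_CACHE_ADVICE view, which exposes exactly this trade-off. A sketch using the documented columns; output obviously varies by workload:

```sql
-- Predicted physical reads at candidate cache sizes for the DEFAULT pool.
-- Requires db_cache_advice = ON (the default in 9i/10g).
SELECT size_for_estimate AS cache_mb,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = (SELECT value FROM v$parameter
                     WHERE name = 'db_block_size')
ORDER  BY size_for_estimate;
```
]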
Received on Sun Aug 22 2004 - 08:55:11 CDT
