
Re: DB Buffer Cache Size

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Mon, 23 Aug 2004 05:19:27 +1000
Message-ID: <4128f1c0$0$3928$afc38c87@news.optusnet.com.au>


Don Burleson wrote:

>
> Like everything else in Oracle, it depends!
>
> For reads, disk I/O is almost always slower than an LIO.
>
> For writes, you are correct. That's why many DBAs place high-DML
> objects in a separate tablespace (with a different blocksize), and map
> it to a smaller buffer.

I wish you'd start qualifying that nugget of advice, too. You *can't* (ie, shouldn't) utilise different-blocksize tablespaces unless you've got direct I/O. Otherwise you are constrained in the matter.
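
(For anyone following along: in 9i the mechanics of Don's suggestion look something like the sketch below. The names, sizes and file path are invented for illustration; the separate db_16k_cache_size has to exist before a 16k tablespace can be created. And none of it helps, per the above, unless direct I/O is in play.)

  -- Sketch only: a small, separate cache for 16k blocks, then a
  -- tablespace whose blocks will live in that cache. Hypothetical
  -- names, sizes and path throughout.
  ALTER SYSTEM SET db_16k_cache_size = 32M;

  CREATE TABLESPACE high_dml_ts
    DATAFILE '/u02/oradata/mydb/high_dml_ts01.dbf' SIZE 500M
    BLOCKSIZE 16K;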

>> If you buy me more CPUs and better hard disk arrays, as well as more RAM,
>> I can probably push my caches bigger than before and not notice a cost
>> being incurred. But if you just keep buying more and more RAM, and making
>> your caches bigger and bigger, then you are going to overwhelm your
>> existing CPUs with memory management issues, and kill DBWn with a
>> workload he can't keep up with.

>
> Yes for DML-intensive databases. But not for DSS, OLAP and DW
> systems!

Then you need to qualify the systems for which the 'add a few noughts to the db_cache_size parameter' school of tuning applies.

>> But I won't even accept your statement at face value, either. To say that
>> increasing the buffer cache reduces physical disk I/O is simply another,
>> slightly fancier, way of saying 'tune by getting a good buffer cache hit
>> ratio'. Now the buffer cache hit ratio is known to be a lousy way to tune
>> the buffer cache, and has been done to death here and elsewhere many,
> many times. Dress it up any way you like, but your formulation simply
>> seeks to re-instate it as being of some merit.

>
> The DBHR is a metric that measures the probability that a block will
> be in the buffer on re-read, nothing more.

Good to hear it. It's not what a large number of DBAs have been taught over a lengthy period of time.
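
(For the record, since the number gets quoted so often without that caveat: the conventional computation is the query below, against v$sysstat. It tells you the chance that a requested block was already in the cache -- and nothing else about whether the cache is sensibly sized.)

  -- The classic buffer cache hit ratio calculation. A high value
  -- does not mean the cache is well tuned.
  SELECT 1 - (phy.value / (cur.value + con.value)) AS dbhr
  FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
  WHERE  cur.name = 'db block gets'
  AND    con.name = 'consistent gets'
  AND    phy.name = 'physical reads';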

> Howard, what do you think about the AMM method for predicting when to
> add more buffers? It's kinda like v$db_cache_advice.

V$db_cache_advice is better than a kick in the teeth, and useful when you otherwise haven't a clue. There's not a single advisory which provides a magic fix, however.
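
(For those who haven't met it: the advisory is populated once the db_cache_advice parameter is ON, and a query along these lines shows the estimated physical reads at a range of hypothetical cache sizes. A guide, in other words, not a prescription.)

  -- Estimated effect of resizing the default buffer cache.
  -- Requires db_cache_advice = ON; the figures are estimates only.
  SELECT size_for_estimate,
         buffers_for_estimate,
         estd_physical_read_factor,
         estd_physical_reads
  FROM   v$db_cache_advice
  WHERE  name = 'DEFAULT'
  AND    block_size = (SELECT value FROM v$parameter
                       WHERE name = 'db_block_size')
  AND    advice_status = 'ON'
  ORDER  BY size_for_estimate;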

>> Do you honestly think that cache tuning comes down to making things as
>> big as possible? If it were so, how in heaven's name has an entire
>> industry been beguiled into paying considerable consulting fees to so
>> many 'experts' for so long? Instead of shelling out a couple of grand to
>> get Don in to tune for a day, why don't I just add a few noughts to the
>> DB_CACHE_SIZE setting for free??

>
> Interesting point. When I visit a client I usually find thousands of
> sub-optimal SQL statements that would take months to manually tune.
> To get a head-start, I tweak the instance parms to broad-brush tune as
> much as possible to the lowest common denominator. Then I can sweep
> v$sql_plan and tune the exceptions. Tuning the instance first saves
> buckets of manual tuning effort and lowering optimizer_index_cost_adj
> will sometimes remove sub-optimal full-table scans for hundreds of
> statements in just a few minutes. What's wrong with that?

What's wrong with it? Er, if you tune the instance first, you will (perhaps) have configured lots of memory which was never actually needed, had you only sorted the code out first. If that configuration involves a $40,000 purchase, I would be a mite pissed off about it.

It's funny: the Oracle Performance Tuning course has for years been drumming into people (correctly, IMO) the order of events: Design, Application, Memory, I/O, Contention, Operating System. Any other order of events is likely to result in 'loop tuning', where you fix problem A, move on to problem B, and find that fixing problem B has re-opened problem A.

No doubt it's lucrative for the consultant. But it's not efficient.
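
(The sweep Don describes is simple enough to reproduce, mind you -- something like the sketch below, which counts the cached statements full-scanning each table. The schema filter and the parameter value are illustrative only. What no query like this can tell you is whether the full scan is the optimizer's fault or the code's, which is rather my point.)

  -- Which tables are being full-scanned by statements in the shared
  -- pool, and how often. Sketch only; filter to taste.
  SELECT object_owner, object_name, COUNT(*) AS full_scan_plans
  FROM   v$sql_plan
  WHERE  operation = 'TABLE ACCESS'
  AND    options = 'FULL'
  AND    object_owner NOT IN ('SYS', 'SYSTEM')
  GROUP  BY object_owner, object_name
  ORDER  BY full_scan_plans DESC;

  -- The broad-brush knob Don mentions: values below the default of
  -- 100 make index access look cheaper across the board. The 20 is
  -- an example, not a recommendation.
  ALTER SYSTEM SET optimizer_index_cost_adj = 20;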

>> The reason we have experts is because the problem is a complex one. And
>> no, just throwing more memory at the problem does NOT make it go away.

>
> Hmmm, consider this. I had a client last month with a REALLY
> messed-up database and it was HEAVILY I/O bound (85% read waits).
> Instead of charging $100k to fix the mess I replaced the disks with a
> high-speed RAM-SAN (solid-state disk) for only $40k. The system ran
> 15x faster, in less than 24 hours, and the client was VERY happy.
> Granted, it's not elegant, but hey, why not throw hardware at lousy
> code if it's cheap and fast?

Because he's now just bought hardware he doesn't actually need. I wish I could be so cavalier with other people's money...

I'm surprised you let your trade secrets out this readily, however. No-one need ever call you in again. Much cheaper and quicker to call the hardware vendor in.

>> People talk a lot of nonsense quite often, don't they?

>
> Like it or not, disk will soon be as obsolete as drums! I remember
> when 1.2 gig of disk cost $250k, and I can now get 100 gig of SSD for
> only $110k.
>
> I have several fully-cached clients (some using SSD and others with a
> huge KEEP pool), and they run great...
>
> I hear rumors that Redwood is working on a solid-state Oracle in a
> future release where the caches will disappear completely, so let's
> enjoy this discussion while we still can!

I think we're addressing different audiences, Don. You seem to have the money-no-object end of town all sewn up. I'll stick with the ordinary folk.
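
(For completeness, since fully-cached KEEP pools keep coming up: the mechanics are just the standard ones below -- a sketch with invented names and sizes -- a KEEP cache made big enough to hold the chosen segments in their entirety.)

  -- Sketch only: carve out a KEEP cache, then assign a segment's
  -- blocks to it. Name and size are hypothetical.
  ALTER SYSTEM SET db_keep_cache_size = 512M;

  ALTER TABLE app_owner.hot_table STORAGE (BUFFER_POOL KEEP);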

> BTW, how was your visit to the USA?

Wonderful, actually. Highlight (and surprise) of the trip: the Carolinas. Everything else was magnificent, too, of course. But they were particularly delightful.

HJR
