Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.server -> Re: DB Buffer Cache Size

Re: DB Buffer Cache Size

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Wed, 25 Aug 2004 07:46:27 +1000
Message-ID: <412bb732$0$31477$afc38c87@news.optusnet.com.au>


Niall Litchfield wrote:

> I just don't buy that Oracle corp ever considered folk that buy, as
> opposed to write systems, when they came up with the perf tuning course. I
> may have entirely misread it, but "design" in the docs has always to me
> seemed to imply logical and physical design - this is (with the rather
> important caveat that generally you can add objects to a physical design -
> indexes, partitions, mviews etc) generally beyond folk who bought the
> product once, pay less than 10% of the suppliers yearly revenue in support
> costs, and have a cross platform supplier to deal with.

True to an extent. There's no doubt the course is heavily developer-oriented. But that's probably because it's developers/designers that build 80% of poor performance into systems.

On the other hand, a DBA or manager needs to have yardsticks by which to judge or assess a product, even if there's nothing much beyond that he can do. When you are evaluating two potential competitor products, would it not be handy to be able to say to the vendors, "And how much use do you make of materialised views?" or "Why did you decide not to implement partitioning?"

Also, consider a true case. Shrink-wrapped app performing poorly. DBA puts in a proposal to purchase 8 more CPUs and 32GB extra RAM to deal with the problem. Manager of DBA baulks and calls for a second opinion. It is noted that one particular SQL statement is written so abysmally, it is doing tens of millions of physical I/Os and causing the problem for everyone else. It was therefore suggested a better-performing disk sub-system might be a better hardware purchase than more CPUs and RAM. It was also suggested that they might want to consider Data Guard, and run the killer SQL on the standby instance. Or that they go for a two-node one-active-instance RAC and use the spare node for the killer SQL.
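(For anyone wondering how you'd spot such a statement in the first place: a query along the following lines against V$SQL would surface it. This is just an illustrative sketch, assuming SELECT privilege on the dynamic performance views; the ROWNUM wrapper is the usual top-N idiom.)

```sql
-- Illustrative only: list the statements doing the most physical I/O.
-- Assumes access to V$SQL; disk_reads = physical reads since the
-- statement was loaded into the shared pool.
SELECT *
FROM  (SELECT sql_text,
              disk_reads,
              executions,
              ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
       FROM   v$sql
       ORDER  BY disk_reads DESC)
WHERE ROWNUM <= 10;
```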

They went for the better disk array in the end, and as far as I know their problems are behind them. The particular SQL statement is still a dog's dinner, but the disk I/O rates are such that they can cope. They also know that SQL statement is the main culprit, and therefore are more careful about when they take the options that cause it to be issued.

So no, they couldn't touch the code their application was issuing. The code was awful. But by analysing it in precisely the way the Oracle course would have had you analyse it, they made the correct hardware purchase.

> The material
> assumes it is your app, with your well defined business goals, that you
> are designing. I don't live in that world. I live in the 'we bought this -
> but producing this report that we never tested is slower than a snail on
> dope' world.

I still don't know how in the world you would go about finding an appropriate workaround for such problems unless you identify them correctly in the first place. And the DAMICO/S approach gets you doing that. I'm sure there are other approaches you could take that would achieve the same thing too. But diagnosing shrink-wraps still requires an assessment of design and application coding, whatever order you do it in.

Regards
HJR

Received on Tue Aug 24 2004 - 16:46:27 CDT

