
Re: DB Buffer Cache Size

From: Howard J. Rogers <hjr_at_dizwell.com>
Date: Tue, 24 Aug 2004 05:03:42 +1000
Message-ID: <412a3f8d$0$6809$afc38c87@news.optusnet.com.au>


Niall Litchfield wrote:

>> > Like everything else in Oracle, it depends!
>> > 
>> > For reads, disk I/O is almost always slower than an LIO.
>> > 
>> > For writes, you are correct.  That's why many DBAs place high-DML
>> > objects in a separate tablespace (with a different blocksize), and map
>> > it to a smaller buffer.
>> 
>> I wish you'd start qualifying that nugget of advice, too. You *can't*
>> (ie, shouldn't) utilise different blocksize tablespaces unless you've got
>> directio. Otherwise you are constrained in the matter.

>
> I think that I know what you mean - that irrespective of your block
> size you are dependent upon OS read() calls which take place at OS
> block size - but worry a little bit about saying *can't* - I'd rather
> say that doing so gives no discernible benefit.

I did qualify the can't. I'd still prefer to leave it at that word rather than any other formulation, because you simply shouldn't. It's not a question of discerning the benefits, either. It's a question of being able to measure the costs.
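For anyone wondering what the mechanics under discussion actually look like, this is a rough sketch only - names, sizes and paths are invented, and it assumes the database's default block size is not already 16K:

   -- Purely illustrative. A tablespace with a non-default block size
   -- needs a buffer cache of that block size to exist first.
   ALTER SYSTEM SET db_16k_cache_size = 32M;

   CREATE TABLESPACE hot_dml_16k
      DATAFILE '/u02/oradata/mydb/hot_dml_16k_01.dbf' SIZE 512M
      BLOCKSIZE 16K;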

>> > Interesting point.  When I visit a client I usually find thousands of
>> > sub-optimal SQL statements that would take months to manually tune.
>> > To get a head-start, I tweak the instance parms to broad-brush tune as
>> > much as possible to the lowest common denominator.  Then I can sweep
>> > v$sql_plan and tune the exceptions.  Tuning the instance first saves
>> > buckets of manual tuning effort and lowering optimizer_index_cost_adj
>> > will sometimes remove sub-optimal full-table scans for hundreds of
>> > statements in just a few minutes.  What's wrong with that?
>> 
>> What's wrong with it? Er, if you tune the instance first, you will
>> (perhaps) have configured lots of memory which wasn't ever actually
>> needed if only you'd sorted the code out. If that involves a purchase of
>> $40,000, I would be a mite pissed off about that.
>> 
>> It's funny: the Oracle Performance Tuning course has for years been
>> drumming into people (correctly, IMO) the order of events: Design,
>> Application, Memory, I/O, Contention, Operating System. Any other order
>> of events is likely to result in 'loop tuning', where you fix problem A,
>> move onto problem B, and find that fixing problem B has re-blown problem
>> A.
>> 
>> No doubt it's lucrative for the consultant. But it's not efficient.

>
> I have major issues with this particular approach as a DBA. The
> problem with it is not its correctness but the ability to deal with
> this when you cannot touch either the code or the design - i.e. when
> you have bought the product. For a *lot* of products you can in fact
> touch the design or application by working with the application, but
> unfortunately you usually don't realize the significance of your
> application setup choices until you have made them.
>
> It's like the 'use bind variables' mantra - great if I'm writing the
> SQL, a complete bugger if all I can do is establish that the app doesn't
> use them.

No, that's not all you can do. You can measure the number of logical I/Os this shrink-wrapped product is making. You can measure the number of full-table scans. You can measure the number of hard parse calls. And then you can approach the vendor of the product with cold, hard facts and say, 'Do you know your product is s**t, and I've got the numbers to prove it... please explain (and fix) before I go buy a competing product'.
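By way of a rough sketch only (exact statistic names vary a little by version, and this is evidence-gathering, not a diagnosis):

   -- Instance-wide counters for the sort of evidence I mean:
   SELECT name, value
   FROM   v$sysstat
   WHERE  name IN ('session logical reads',
                   'table scans (long tables)',
                   'parse count (hard)');

   -- And the individual statements doing the most logical I/O:
   SELECT buffer_gets, executions, sql_text
   FROM   v$sql
   ORDER  BY buffer_gets DESC;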

Then, of course, you've not got much else to do practically except dive in at the initialisation parameter stage (and start planning hardware purchases), although you can (or might be able to) think about switching cursor_sharing to something other than EXACT; or maybe consider moving tables around to eliminate an I/O contention issue; or maybe think of rebuilding some tables to ease the pain of FTS a little; or maybe think of partitioning some tables to minimise the pain of FTSes some more; or maybe retrofit your database with some materialised views, which don't require access to the application's SQL to get used, thanks to the magic of query rewrite.
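The materialised view point deserves a concrete (and entirely invented) illustration: the application keeps issuing its own SQL against the base table, but with query rewrite enabled the optimiser can silently satisfy it from the summary instead.

   -- Invented schema, purely for illustration.
   CREATE MATERIALIZED VIEW sales_by_region_mv
      BUILD IMMEDIATE
      REFRESH ON DEMAND
      ENABLE QUERY REWRITE
   AS SELECT region, SUM(amount) AS total_amount
      FROM   sales
      GROUP  BY region;

   -- The application's statements need no change; rewrite just has to
   -- be switched on (it may already be, depending on version).
   ALTER SYSTEM SET query_rewrite_enabled = TRUE;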

But you can't do any of that meaningfully without already *knowing* that the problem is FTSes, or a lack of bind variables, or I/O contention... and that all means measuring and tuning in the prescribed manner. It certainly doesn't mean breaking out the chequebook and hardware catalogue first thing!
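And the measuring is not hard, just necessary. A crude sketch of establishing both the full-table-scan and the bind-variable points (a starting point, not a diagnostic toolkit):

   -- How many full table scans are sitting in the shared pool right now?
   SELECT COUNT(*)
   FROM   v$sql_plan
   WHERE  operation = 'TABLE ACCESS'
   AND    options   = 'FULL';

   -- Statements that differ only in their literals betray themselves as
   -- many copies sharing the same leading text.
   SELECT SUBSTR(sql_text, 1, 60) AS sql_prefix, COUNT(*) AS copies
   FROM   v$sql
   GROUP  BY SUBSTR(sql_text, 1, 60)
   HAVING COUNT(*) > 10
   ORDER  BY copies DESC;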

So, as far as D-A-M-I/O-C-O/S goes, it's still the sensible order of events, regardless of whether you can actually achieve very much by way of practical improvement at each and every step.

Regards
HJR

Received on Mon Aug 23 2004 - 14:03:42 CDT
