Re: Database Hit Ratios

From: Nuno Souto <nsouto_at_optushome.com.au>
Date: 9 Jul 2002 19:30:47 -0700
Message-ID: <dd5cc559.0207091830.5b455686@posting.google.com>


"Richard Foote" <richard.foote_at_bigpond.com> wrote in message news:<EUaW8.30337$Hj3.92142_at_newsfeeds.bigpond.com>...

>
> There's a fly buzzing around near you :)
>

Hey, in Australia that would be the norm, not the exception!

:-D

Coming back to hit ratios for a micro-second.

The problem with buffer hit ratios will always be the same. They provide an easy diagnostic, if that. They are a vague symptom. As a performance tuning tool, they are next to useless.
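
For the record, the number people quote usually comes from something like the query below against V$SYSSTAT (the statistic names are the standard ones; the exact query varies from book to book). Note that it is a single figure for the whole instance since startup, which is half the problem right there:

    -- Classic instance-wide buffer cache hit ratio from V$SYSSTAT
    SELECT ROUND((1 - phys.value / (db.value + cons.value)) * 100, 2)
             AS buffer_cache_hit_ratio_pct
    FROM   v$sysstat phys, v$sysstat db, v$sysstat cons
    WHERE  phys.name = 'physical reads'
    AND    db.name   = 'db block gets'
    AND    cons.name = 'consistent gets';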

The thing is: if we have a very inefficient SQL statement that does a LOT of unnecessary I/O, WHERE IS THE PROBLEM?

Is it in the database buffers that are not large enough to convert all that physical I/O into logical I/O? Or is it in the SQL itself?

Is increasing the buffer hit ratio going to fix the problem or just move it somewhere else?
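
Made-up numbers, purely to illustrate: say a statement does 1,000,000 buffer gets, of which 900,000 are unnecessary, and 100,000 of them end up as physical reads.

    hit ratio = 1 - 100,000 / 1,000,000 = 90%

Double the buffer cache and the physical reads drop to, say, 10,000:

    hit ratio = 1 - 10,000 / 1,000,000 = 99%

The ratio now looks wonderful. The 900,000 wasted gets (and the CPU and latching they cost) are still there. The problem just moved.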

The approach followed by M$ and its "self-tuning" databases is that we should just add memory and get rid of the physical I/O. Stuff it if we get a trillion logical I/Os: CPU and memory are cheap compared to DBAs (as if DBAs had ANYTHING to do with this...).

The approach followed by anyone with a brain and even a minimal amount of scientific skill is that we need to examine WHERE the problem is, instead of patching the (vague) symptom. This requires a small amount of grey-matter exercise. Something sadly lacking in this day of "multiple-choice certifications".
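
One way to start (there are plenty of others: SQL trace, Statspack, and so on) is to go straight to the statements doing the logical I/O. Something along these lines against V$SQL does the job (column names are the standard ones; adjust to taste):

    -- Top 10 statements by logical I/O: this is where the work is,
    -- whatever the instance-wide hit ratio says
    SELECT *
    FROM  (SELECT buffer_gets,
                  disk_reads,
                  executions,
                  ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec,
                  sql_text
           FROM   v$sql
           ORDER  BY buffer_gets DESC)
    WHERE  rownum <= 10;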

The problem with M$'s (and nowadays Oracle's...) approach is that mathematics tells us logical errors in SQL translate into order-of-magnitude increases in the demand for hardware resources.

Be they CPU, memory, disk, whatever.
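
A deliberately silly, invented example of what "orders of magnitude" means here: leave one join predicate out of a join between two 10,000-row tables and the database has to consider

    10,000 * 10,000 = 100,000,000 row combinations

instead of roughly 10,000 with the predicate in place. That is a factor of 10,000 -- four orders of magnitude -- from one missing line of SQL. No hardware budget keeps up with that kind of growth.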

Add to that the simple fact that data volumes have been growing by orders of magnitude over the last few years, and a BIG problem emerges:

It does NOT matter how much hardware you throw at bad code. It will always consume ALL available resources and ask for more. Assuming no other bottlenecks.

Any doubts? Look at that XP box in front of you for confirmation... :-)

Using this approach with databases (for whatever reason) is a bad idea. All it does is perpetuate bad code and make it all-pervasive. Instead of fixing the problem, we're trying to fix the symptom. An old story really, but it keeps rearing its ugly head. People just forget about it.

I think Niemiec's intention (from reading his books) is to fix the code. Like it should be. IMHO, he's just using the wrong yardstick to measure the problem and identify the fix: an indirect symptom instead of real data. That's all.
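
And the real data is not hard to get. Even with nothing but the standard kit, tracing the offending session and running the trace file through tkprof shows exactly which statements do the I/O and where the time goes. Roughly (details vary by version, and the tkprof options are all in the docs):

    -- In the session you want to look at:
    ALTER SESSION SET sql_trace = TRUE;
    -- ... run the suspect workload ...
    ALTER SESSION SET sql_trace = FALSE;

Then format the trace file from the udump directory with something like

    tkprof <tracefile>.trc report.txt sys=no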

Just my $0.02 anyways.
Cheers
Nuno Souto
nsouto_at_optushome.com.au
Received on Tue Jul 09 2002 - 21:30:47 CDT
