Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: the ways of buffer hit ratios are mysterious

From: <markp7832_at_my-deja.com>
Date: Fri, 17 Dec 1999 17:53:30 GMT
Message-ID: <83dtao$2ed$1@nnrp1.deja.com>


In article <83dr65$nn$1_at_nnrp1.deja.com>,   attwoody <attwoody_at_my-deja.com> wrote:
> Greetings, all
>
> I have some twelve (12) Oracle books, most of them dealing with
> tuning.
>
> In all of them, there are discussions about the Buffer Cache Hit
> Ratio, with the discussion being, "Thou shalt increase thy
> DB_BLOCK_BUFFERS in order to increase thy hit ratio"
>
> I've reorganized the database, increased DB_BLOCK_BUFFERS,
> rebuilt indexes, added indexes, etc, etc, etc. I've gone into the
> x$tables, I've set DB_BLOCK_LRU_EXTENDED_STATISTICS to see the
> effects of subtracting or adding buffers.
>
> It seems no matter how much I tweak and tune - some days, the hit
> ratio is in the 90's, some days it's in the 70's.
>
> Since the application is PeopleSoft 6, there's not a lot I can do
> to the code.
>
> It occurred to me that PeopleSoft has a lot of little tables that
> are used to retrieve data, and that a lot of table scans are being
> done - could this be contributing to what I'm seeing with the hit
> ratio? Would it be better to cache these small, frequently used
> tables?
>
> Another application I support, called Glovia, has _excellent_
> performance statistics - but Glovia is _specifically_ written for
> Oracle - it has lots of stored procedures and packages, and takes
> advantage of Oracle performance enhancements. PeopleSoft's standard
> answer is "we want to make everything generic so it can run on
> any database". Well, it makes _my_ life more difficult!
>

The buffer pool hit ratio is a meaningless statistic by itself. A hit ratio of 80 is good if performance is good, and 95 is bad if performance is poor. You can have a high hit ratio and lousy performance when you have a lot of reads to dummy tables like sys.dual, or repeated index access that should be replaced with a single full table scan and a hash join. Data extraction jobs are often best run as one full table scan of the target table, but these kinds of jobs have lousy hit ratios.
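For reference, the ratio those books talk about is conventionally computed from v$sysstat along these lines (a sketch; the exact statistic names should be verified against your Oracle version):

```sql
-- Conventional buffer cache hit ratio from v$sysstat.
-- Statistic names as used in Oracle 7/8; check your release.
SELECT ROUND(100 * (1 - phy.value / (cur.value + con.value)), 2)
         AS "Buffer Cache Hit Ratio %"
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```

Note that these counters are cumulative since instance startup, which is one more reason the single number tells you little about what any particular job is doing.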

Since your ratio seems to jump around, the change in the statistics could be related to specific processes that run on demand, weekly, etc. You might want to try tracking the ratio, say, every 15 minutes for a couple of months and look for patterns that you can then trace to jobs. Perhaps some of the jobs can be tuned, perhaps not, but knowing why the ratio changes gives you the ability to explain your system to others (managers, consultants, ...).
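One low-tech way to do that tracking (a sketch; the table and column names are just illustrative, and you would schedule the insert with something like DBMS_JOB):

```sql
-- Snapshot table for periodic samples (illustrative names).
CREATE TABLE hit_ratio_log (
  sample_time      DATE,
  db_block_gets    NUMBER,
  consistent_gets  NUMBER,
  physical_reads   NUMBER
);

-- Take one sample; run this every 15 minutes.
INSERT INTO hit_ratio_log
SELECT SYSDATE,
       MAX(DECODE(name, 'db block gets',   value)),
       MAX(DECODE(name, 'consistent gets', value)),
       MAX(DECODE(name, 'physical reads',  value))
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
```

Because the v$sysstat counters are cumulative, compute the ratio from the deltas between consecutive samples rather than from each row by itself; the per-interval ratios are what will line up with your on-demand and weekly jobs.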

Just my opinion.
--
Mark D. Powell -- The only advice that counts is the advice that  you follow so follow your own advice --

Sent via Deja.com http://www.deja.com/
Before you buy.

Received on Fri Dec 17 1999 - 11:53:30 CST

