Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Cached blocks and performance

Re: Cached blocks and performance

From: Jonathan Bliss <bliss_jonathan_at_hotmail.com>
Date: 30 Aug 2002 02:37:50 -0700
Message-ID: <ae530df0.0208300137.219a8e1c@posting.google.com>


Norman Dunbar <Norman.Dunbar_at_lfs.co.uk> wrote in message news:<E2F6A70FE45242488C865C3BC1245DA702935A68_at_lnewton.leeds.lfs.co.uk>...
> Oracle caches blocks in either the DB_BUFFER_CACHE or the KEEP_POOL or
> the RECYCLE_POOL (depending on the version and any options you specified
> when creating the object of course).
>
> So if a read requires a single row from a table, Oracle has to check if
> it is already in the cache and if so this counts as a logical read. If
> not, go to the disc and read one block (one Oracle block, that is) into the
> cache (= one physical read) then read from the cache (= one logical
> read).
>
> Does this help ?
>
> Regards,
> Norman.

Thanks Norman,
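
For what it's worth, the logical/physical split you describe shows up directly in the v$ views. A minimal sketch (statistic names as I understand them in 8i, so treat the exact strings as assumptions):

```sql
-- Logical reads in 8i are roughly 'db block gets' + 'consistent gets';
-- compare those against 'physical reads' for the whole instance.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets', 'physical reads');
```

Sampling these before and after a suspect query (via v$mystat for just your own session) shows how many of its reads were satisfied from the cache.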

The system I am working with (8.1.7.x, mostly NT, a data cleansing/warehouse application) was originally developed in Oracle 6, and I was originally an application specialist, not a DBA. (These are my excuses and I am sticking to them.)

There is no keep pool or recycle pool (yet), just the default pool. I have some candidate tables and indexes for the keep pool, but one of the indexes attaches to a regularly modified reference table (it changes, but doesn't get much bigger). If I understand the manual (and Jonathan Lewis' excellent book, and the current thread on caching) correctly, I would need either to make my keep pool larger than really needed, or to coalesce that index regularly, to reduce the risk of it flushing warm blocks (blocks warm enough that they would not have been flushed from my default pool, so losing them from the keep pool would reduce performance).
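
To make that concrete, here is a sketch of how I understand the 8i mechanics; the object names are made up, and the keep pool must first be carved out of db_block_buffers in init.ora (e.g. buffer_pool_keep = (buffers:2000)):

```sql
-- Hypothetical names: assign a candidate index to the KEEP pool (8i syntax).
ALTER INDEX my_ref_table_pk REBUILD STORAGE (BUFFER_POOL KEEP);

-- Size the pool from what the candidates actually occupy, with headroom:
SELECT segment_name, blocks
FROM   dba_segments
WHERE  segment_name IN ('MY_REF_TABLE', 'MY_REF_TABLE_PK');
```

The headroom is the point: if the growing index overflows the keep pool, it evicts exactly the warm blocks the pool was meant to protect.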

This relates to my earlier question. Basically, I think performance is being adversely affected because I am sometimes accessing more blocks than I should have to in order to get the rows I need back. Although my overall cache hit ratio is good (I've read the myths thread and know it isn't an indicator to be used on its own), some queries seem to generate excessive I/O activity, i.e. physical reads are much higher for one of two queries that have similar execution plans and return similar numbers of rows. What forces a query to go to disc? As far as I can gather, among other things:
1 - Not enough cache.
2 - Sparse rows in objects (too large PCTFREE, stale index leaf blocks, or the rows selected just happen to be spread across many blocks).
3 - Rows chained across blocks (too small PCTFREE, regular updates).
4 - Poor execution plans.
5 - Contention between users.
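
The symptom itself, i.e. one statement reading far more from disc than its twin, can be seen in the shared SQL area. A sketch (threshold is arbitrary):

```sql
-- Per-statement cache behaviour: buffer_gets (logical) vs disk_reads (physical).
SELECT sql_text, executions, buffer_gets, disk_reads
FROM   v$sql
WHERE  disk_reads > 1000
ORDER  BY disk_reads DESC;
```

Comparing buffer_gets per execution for the two similar queries should show whether one is simply visiting more blocks, or visiting the same number but missing the cache.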

Which of these is true in my case? I think I have ruled out 1, 4 and 5; could it be 2 or 3? How can I diagnose this, and what is the cure?
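
For separating 2 from 3, my understanding is that the dictionary can answer both; a sketch, with hypothetical object names (note VALIDATE STRUCTURE takes a lock, so not on a busy production table):

```sql
-- Cause 3, chained/migrated rows: ANALYZE populates chain_cnt.
ANALYZE TABLE my_table COMPUTE STATISTICS;
SELECT table_name, num_rows, chain_cnt, blocks, avg_row_len
FROM   user_tables
WHERE  table_name = 'MY_TABLE';

-- Cause 2, sparse index leaf blocks: VALIDATE STRUCTURE fills index_stats
-- (one row, valid only for the current session until the next ANALYZE).
ANALYZE INDEX my_ref_table_pk VALIDATE STRUCTURE;
SELECT name, lf_rows, del_lf_rows, pct_used
FROM   index_stats;
```

A high chain_cnt points at 3 (cure: raise PCTFREE and rebuild, or move the table); a high del_lf_rows or low pct_used points at 2 (cure: coalesce or rebuild the index).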

Any thoughts welcome.

Received on Fri Aug 30 2002 - 04:37:50 CDT

