Re: Cached blocks and performance

From: Jonathan Bliss <bliss_jonathan_at_hotmail.com>
Date: Sun, 1 Sep 2002 19:42:11 +0100
Message-ID: <G6tc9.3229$fY.85349@newsfep2-gui>

"Norman Dunbar" <Norman.Dunbar_at_lfs.co.uk> wrote in message news:E2F6A70FE45242488C865C3BC1245DA70294B238_at_lnewton.leeds.lfs.co.uk...

> >> some queries seem to
> >> generate excessive I/O activity, i.e. physical reads are much higher
> >> for one of two queries that have similar execution plans and return
> >> similar rows. What forces a query to go to disc?
 

> >> 2 - Sparse rows in objects (too large pct free, stale index leaf
> >> blocks, rows selected just happen to be spread across many blocks)
 

> >> 3 - Rows chained across blocks (too small pct free, regular updates)
 

> >> Which of these is true in my case? I think I have ruled out 1, 4
> >> and 5 - could it be 2 or 3? How can I diagnose this, and what is
> >> the cure?
>
> Well, you'd go to disc any time the row you want is not in a block
> already in the cache. This means that either the block has never been
> read, or it has aged out. In the case of an FTS, blocks age out quicker.
>
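
(For what it's worth: a rough way to see how much of an object is still
in the cache is to count its buffers in V$BH - assuming V$BH is
available on your version. A sketch only; 'MY_TABLE' is a placeholder
for the real table name:

    -- Count how many of a table's blocks are currently in the buffer
    -- cache. V$BH.OBJD joins to DBA_OBJECTS.DATA_OBJECT_ID.
    SELECT o.object_name, COUNT(*) AS cached_blocks
      FROM v$bh b, dba_objects o
     WHERE b.objd = o.data_object_id
       AND o.object_name = 'MY_TABLE'
     GROUP BY o.object_name;

Run it before and after the problem query and you can watch its blocks
arrive and age out.)
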
> If your query is forcing the use of an index, you'll need cache space
> for the index blocks too. Again, use of an index needs to read the
> index root, intermediate and leaf blocks and then the data block. So if
> the query is hinted to use an index when an FTS would be better, then
> you're going to hit the disc - if the cache is too small.
>
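
(To put a number on that index cost: once the index is analysed,
USER_INDEXES shows its height, so each single-row lookup costs roughly
BLEVEL + 1 index block gets plus one table block get. A sketch, with
'MY_INDEX' as a placeholder:

    -- BLEVEL is the branch depth from root to leaf, so an index
    -- lookup reads BLEVEL + 1 index blocks before the table block.
    SELECT index_name, blevel, leaf_blocks
      FROM user_indexes
     WHERE index_name = 'MY_INDEX';

LEAF_BLOCKS gives a feel for how much cache the index itself will want.)
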
> If you have chaining then you could be using a block size that is too
> small. I'm on Unix and we use 8K. I believe Howard has used 16K (and
> possibly 32K) on Windows with no problems. Bigger blocks give you more
> space to put data in, as the percentage of each block taken up by
> Oracle overhead is smaller. You should watch out for INITRANS and
> FREELISTS when using a bigger block. If you think you have chaining,
> analyse the tables and check chain_cnt in user_tables for non-zero
> entries (DBMS_STATS is the preferred way to gather optimizer
> statistics, but it doesn't populate chain_cnt - you need ANALYZE for
> that).
>
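
For the chaining check, this is the sort of thing I had in mind (a
sketch - the table name is a placeholder, and as you say it has to be
ANALYZE rather than DBMS_STATS because only ANALYZE fills in CHAIN_CNT):

    -- ANALYZE populates CHAIN_CNT in USER_TABLES; DBMS_STATS is the
    -- preferred tool for optimizer statistics but skips that column.
    ANALYZE TABLE my_table COMPUTE STATISTICS;

    -- Any non-zero CHAIN_CNT means chained or migrated rows.
    SELECT table_name, num_rows, chain_cnt
      FROM user_tables
     WHERE chain_cnt > 0;

If it comes to it, ANALYZE ... LIST CHAINED ROWS INTO the table created
by utlchain.sql will identify the individual rowids.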

What I think is happening in this case is that the required rows are read and then revisited regularly, as the process of interest is iterative. In some cases the cache is large enough that few of the blocks containing the rows of interest are aged out, but in others many blocks end up having to be re-read from disc. Sometimes the re-reads will be because there are lots of rows; sometimes the problem will be sparse rows. That is the user complaint I'm trying to address: 'Why did that 25,000 record batch take twice as long as the 50,000 record batch last week?'

Having digested the help from Sybrand, Richard and yourself, I think I have a way forward for checking, and maybe some things I can do to make things behave consistently (sometimes I think 'consistent but bad' would get fewer calls).
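
The checking I have in mind is along these lines (a sketch - the LIKE
pattern, table name and 8K block size are placeholders for my actual
setup):

    -- Compare the two 'similar' queries: a much higher ratio of
    -- DISK_READS to BUFFER_GETS for one of them points at blocks
    -- being re-read from disc rather than found in the cache.
    SELECT sql_text, executions, buffer_gets, disk_reads
      FROM v$sql
     WHERE sql_text LIKE '%MY_BATCH_QUERY%';

    -- Crude sparseness check: space the rows should need versus the
    -- blocks actually allocated (statistics must be current).
    SELECT table_name, num_rows, avg_row_len, blocks,
           ROUND(num_rows * avg_row_len / 8192) AS est_blocks_needed
      FROM user_tables
     WHERE table_name = 'MY_TABLE';

If BLOCKS is several times EST_BLOCKS_NEEDED, the rows are spread thin
and each fetch drags a nearly-empty block into the cache.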

Thanks again

Jonathan

ps I'm in Leeds as well so if we ever meet, I owe you a drink.
