
Re: DB File Sequential Read Waits

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 4 Jan 2004 10:42:08 -0000
Message-ID: <bt8qmj$j3n$1$8300dec7@news.demon.co.uk>

Notes inline.

-- 
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

  The educated person is not the person
  who can answer the questions, but the
  person who can question the answers -- T. Schick Jr


One-day tutorials:
http://www.jlcomp.demon.co.uk/tutorial.html


Three-day seminar:
see http://www.jlcomp.demon.co.uk/seminar.html
UK - November


The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


"rjpfal" <ralbertson_at_comcast.net> wrote in message
news:2932a99f.0312311154.7117ef9b_at_posting.google.com...

>
> Ran some SQLTRACE of queries to see what they were doing.

> NOTE: I have not included the statspack.
> I know by shutting down the instance it is not totally realistic, but
> given the randomness of the data I wanted to see what it does when
> forced to do disk IO.
>
> What I am seeing is a lot of buffers read for an equal number of rows,
> and in some cases more buffers than rows.
>
This is reasonable - if you use a high-precision index to fetch N rows, then you do three or four buffer gets on the index, then one buffer get per row if the data is very scattered (i.e. assuming each row is in a separate data block). If the data is tightly clustered around the index key, then you might see 3 or 4 buffer gets to acquire a few dozen rows.
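
If you want a quick indication of how scattered the data is for a particular index, the clustering_factor will tell you. A sketch only - the table name comes from your trace, it assumes the statistics are reasonably current, and you may prefer the dba_ views to the user_ views:

    select  i.index_name, i.clustering_factor,
            t.num_rows, t.blocks
    from    user_indexes  i,
            user_tables   t
    where   i.table_name = t.table_name
    and     t.table_name = 'T83';

A clustering_factor close to the number of rows in the table means each row you pick up through that index tends to come from a different block (one buffer get, and potentially one disc read, per row); a clustering_factor close to the number of blocks means the rows for a given key value are likely to be packed together.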
> Could the global partition be the issue and when all customer tickets
> are retrieved it is hitting more than 1 partition which all reside
> under /u02?
>
>

> Rows     Row Source Operation
> -------  ---------------------------------------------------
>       1  TABLE ACCESS BY GLOBAL INDEX ROWID T83 PARTITION: ROW LOCATION ROW LOCATION
>       2  INDEX UNIQUE SCAN (object id 23615)
>
When you said you were partitioned, I assumed that most of your data access was going to be using locally partitioned indexes - which might have given some scope for reducing I/O and taking advantage of buffer pools.
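
For reference, a local index on the customer column would be created something like this - just a sketch, and the index name is made up; whether it is legal and useful depends entirely on how T83 is actually partitioned:

    create index t83_cust_local
    on t83 (c1207712002)
    local;

Bear in mind that if the customer column has no relationship to the partitioning key, a query that supplies only the customer id has to probe every partition of a local index - so this only pays off if the query can also eliminate partitions.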
> #####Get FEW Fields for All Tickets Opened for Customer#####
>
> SELECT T83.C1,T83.C1,C536870960,C536870961,C536870971
> FROM T83 WHERE (T83.C1207712002 = '0200086451') ORDER BY 1 ASC
>
> call     count       cpu    elapsed       disk      query    current       rows
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> Parse        1      0.00       0.00          0          0          0          0
> Execute      1      0.00       0.00          0          0          0          0
> Fetch        4      0.01       0.01         33         40          0         38
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> total        6      0.01       0.01         33         40          0         38
>
> Rows     Row Source Operation
> -------  ---------------------------------------------------
>      38  SORT ORDER BY
>      38   TABLE ACCESS BY GLOBAL INDEX ROWID T83 PARTITION: ROW LOCATION ROW LOCATION
>      39    INDEX RANGE SCAN (object id 26885)
>
This query is good and efficient - but it has to do a lot of I/O simply because that's the way the data is. The plan shows 39 index entries returned, then 38 table rows returned - so you are not visiting lots of excess table rows and discarding them. The 33 disc reads show that on a couple of occasions you got a table row from a buffered block, but most of your customer's rows were in individual blocks which had to be read from disc.

There are three things I can think of that might help:

a) Find queries where the number of disk reads is significantly higher than the number of rows returned, and check whether the tkprof output shows the table returning far fewer rows than the index, with the disk figure roughly the same as the rows on the INDEX line. This indicates lots of table blocks being visited unnecessarily, and MIGHT be fixed by adding columns to indexes to avoid the excess table visits (see the sketch after this list).

b) See if there is any way of partitioning the data in line with the most important queries, with local indexes, IF it can get you some partition elimination.

c) See if you can cluster the data around the most important queries, so that you can get (in your example) 38 rows for one physical disk read. This might mean creating an index organized table with a primary key starting with the customer id - which you could partition. It might mean re-creating the data as an index cluster on the customer_id - which would block any attempt at partitioning.

The other thing that you really need to know before you set out on hard work - especially hard work which might not help - is what is going on on the other 64G of disk where you have an 8G hypervolume. If other systems are thrashing the discs to death, then you may need to address their workload before you address your own. It would be useful to find out how your tests go if every other system using the discs were quiesced (not that that is likely to be a realistic request).
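
To put options a) and c) into concrete (but hypothetical) terms - the column names below come from your traced query, but the datatypes, the second primary key column, and the partition count are pure guesswork that you would have to correct:

    -- a) a 'covering' index so this query need not visit the table at all
    create index t83_cust_cover on t83
        (c1207712002, c1, c536870960, c536870961, c536870971);

    -- c) an index organized table keyed by customer id, so that all of a
    --    customer's tickets pack into a handful of leaf blocks
    --    (assumes c1 is the unique ticket identifier)
    create table t83_iot (
        c1207712002   varchar2(15),
        c1            varchar2(15),
        c536870960    varchar2(60),
        c536870961    varchar2(60),
        c536870971    varchar2(60),
        constraint t83_iot_pk primary key (c1207712002, c1)
    )
    organization index
    partition by hash (c1207712002) partitions 8;

The covering index trades extra index size (and maintenance cost on insert/update) for the removal of the table visits; the IOT gets you the 'many rows per disc read' effect, but constrains your partitioning options, since the partitioning key of an IOT has to be a subset of the primary key.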
> Out of curiosity I ran the same test but against all 8 CPUs rather
> than a dedicated 2. What is of interest is that the total elapsed time
> was greater when using all 8 rather than 2:
>
Doesn't really mean much - but the more CPUs you have, the sooner any one process is going to go to disc (on average), so the more ferocious the contention is, and the longer the queue at the discs. Get hold of Cary Millsap's book Optimizing Oracle Performance for more information about the impact of concurrency and queueing.
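
As a very rough illustration of how sensitive queue time is to how busy the discs are (the classic single-queue approximation, not a model of your actual array): average response time is roughly

    service time / (1 - utilisation)

so a 10 ms read averages about 20 ms when the device is 50% busy, but about 100 ms when it is 90% busy. Letting eight CPUs drive requests at the same discs instead of two pushes the utilisation - and therefore the queueing - up, even though each individual request is no more expensive.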
Received on Sun Jan 04 2004 - 04:42:08 CST
