Home -> Community -> Usenet -> c.d.o.server -> Re: DB File Sequential Read Waits
"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote in message news:<bsurb2$hbs$3$8302bc10_at_news.demon.co.uk>...
> Note in-line.
>
> --
> Regards
>
> Jonathan Lewis
> http://www.jlcomp.demon.co.uk
>
> The educated person is not the person
> who can answer the questions, but the
> person who can question the answers -- T. Schick Jr
>
>
> One-day tutorials:
> http://www.jlcomp.demon.co.uk/tutorial.html
>
>
> Three-day seminar:
> see http://www.jlcomp.demon.co.uk/seminar.html
> ____UK___November
>
>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html
>
>
> "Noons" <wizofoz2k_at_yahoo.com.au.nospam> wrote in message
> news:3ff2e731$0$18390$afc38c87_at_news.optusnet.com.au...
> >
> > Jonathan might want to confirm this or not, but I reckon striping does
> > help mostly when you have to access consecutive (adjacent) blocks in a
> > single read request. Which is the case in FTS, or in data range scans
> > (if the data is clustered), or in index range scans.
> >
> > Single index access will gain little benefit from striping. You'd have
> > to simulate a very large user load to really see a benefit, I reckon.
> >
>
> You won't gain "directly" from striping if all you do is
> single-block access. But in cases like this, the purpose of
> striping is not to speed up the individual read, but to
> scatter the reads as evenly as possible across the
> available devices and avoid queues. It's a statistical game,
> of course, and you may get unlucky, but it does tend
> to avoid hot spots.
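[The statistical argument above can be illustrated with a quick simulation. This is a sketch, not Oracle behaviour: the stripe unit, block range, and disk count are arbitrary assumptions, with 14 chosen to match the 6 + 8 devices mentioned in the thread.]

```python
import random
from collections import Counter

random.seed(1)

N_READS = 10_000      # single-block read requests
N_DISKS = 14          # 6 data + 8 index devices, as in the thread
STRIPE_UNIT = 8       # blocks per stripe chunk (arbitrary assumption)

# Random block addresses; striping maps consecutive chunks of
# blocks round-robin across the disks.
blocks = [random.randrange(1_000_000) for _ in range(N_READS)]
per_disk = Counter((b // STRIPE_UNIT) % N_DISKS for b in blocks)

# No individual read gets faster, but the load is close to even,
# so no one disk builds up a long queue.
busiest, quietest = max(per_disk.values()), min(per_disk.values())
print(f"busiest disk: {busiest} reads, quietest disk: {quietest} reads")
```

With a hot spot (all reads against blocks on one unstriped device) the busiest disk would take all 10,000 requests; here the spread stays within a few percent of even.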
>
> In another post, the poster said there were 6 data devices
> and 8 index devices. I don't have the statspack I/O report
> any more, but for highly random accesses, I would guess
> that a relatively small number of the reads are going physical
> on the indexes, and most are going physical on the tables.
>
> The statspack also said 213 reads per second - which is about
> 35 per second per disc - which is poor but not a total surprise.
>
> Two strategies: first, spread the data and indexes across all 14
> discs if there are any indications that most of the I/O is on the
> data - this would drop you to 18 I/Os per disc if my assumption is
> correct, and this might help.
>
> Second: look at putting the indexes for the most popular access
> path into a very large KEEP pool and the table into a small
> RECYCLE pool. This is to protect the indexes, and stop their
> effectiveness (cache hit ratio) being eroded by all the table reads.
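[For reference, the buffer-pool assignment described above is done with STORAGE clauses; the index name below is a placeholder, and the pool sizes must be set first via BUFFER_POOL_KEEP/BUFFER_POOL_RECYCLE (8i) or DB_KEEP_CACHE_SIZE/DB_RECYCLE_CACHE_SIZE (9i). Roughly:]

    ALTER INDEX t83_cust_ix STORAGE (BUFFER_POOL KEEP);

    -- T83 is partitioned, so set the default for future partitions
    -- and change each existing partition (partition name assumed):
    ALTER TABLE t83 MODIFY DEFAULT ATTRIBUTES STORAGE (BUFFER_POOL RECYCLE);
    ALTER TABLE t83 MODIFY PARTITION p1 STORAGE (BUFFER_POOL RECYCLE);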
>
> The potential benefit of these moves, though, can only be
> confirmed by a careful check of the SQL, and its costs.
>
> The OP also mentions that all the SQL is efficient and
> using indexes - one guideline for precision is to compare
> buffer_gets with rows_processed. For a single-table access
> in this type of situation, I would be hoping for no more than
> 4 or 5 buffer_gets per row_processed, and no more than
> ONE disk_read per row processed.
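[That buffer_gets-to-rows_processed comparison can be pulled straight from v$sql; something like the following, filtered as needed:]

    SELECT sql_text,
           buffer_gets, disk_reads, rows_processed,
           ROUND(buffer_gets / rows_processed, 1) gets_per_row,
           ROUND(disk_reads  / rows_processed, 1) reads_per_row
    FROM   v$sql
    WHERE  rows_processed > 0
    ORDER  BY buffer_gets DESC;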
Ran some SQLTRACE of queries to see what they were doing.
Steps I followed:
Shutdown the instance
Started Instance
Dedicated 2 CPU's to Oracle processes using psrset
statspack snap
run trace of sql
statspack snap
tkprof on trace file
statsrep
NOTE: I have not included the statspack. I know that shutting down the instance is not totally realistic, but given the randomness of the data I wanted to see what it does when forced to do disk I/O.
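[For anyone repeating the steps above, the trace part looks roughly like this from SQL*Plus, assuming statspack is installed. Event 10046 at level 8 is shown instead of a plain SQL_TRACE=TRUE because it also records the db file sequential read waits being discussed; the tkprof file names are placeholders:]

    EXECUTE statspack.snap;

    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET events '10046 trace name context forever, level 8';

    -- ... run the queries being examined ...

    ALTER SESSION SET events '10046 trace name context off';
    EXECUTE statspack.snap;

    -- then, from the shell:
    -- tkprof <udump>/ora_<spid>.trc trace_out.prf sys=no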
What I am seeing is a lot of buffers read for an equal number of rows, and in some cases more buffers than rows.
Could the global partition be the issue? When all tickets for a customer are retrieved, is it hitting more than one partition, all of which reside under /u02?
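[One way to check that partition question directly from the dictionary, assuming the table is owned by the connected user; each partition has its own data object number, so the first query counts how many partitions one customer's rows actually span:]

    SELECT DBMS_ROWID.ROWID_OBJECT(rowid) AS data_object_id, COUNT(*)
    FROM   t83
    WHERE  c1207712002 = '0200086451'
    GROUP  BY DBMS_ROWID.ROWID_OBJECT(rowid);

    SELECT partition_name, tablespace_name
    FROM   user_tab_partitions
    WHERE  table_name = 'T83';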
Here are results:
#######GET LIMITED ticket info for just 1 ticket:########
SELECT T83.C1,C536870968,C1207712002,C536870917,C3,C536870960,C536870961,C536870971,C7,C536870955,C4,C536870963,T83.C1
FROM T83
WHERE (T83.C1 = '000000000375902') ORDER BY 1 ASC
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.13       0.12          2        197          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          5          5          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.13       0.12          7        202          0           1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 31
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY GLOBAL INDEX ROWID T83 PARTITION: ROW LOCATION ROW LOCATION
      2  INDEX UNIQUE SCAN (object id 23615)
#####Get FEW Fields for All Tickets Opened for Customer#####
SELECT T83.C1,T83.C1,C536870960,C536870961,C536870971
FROM
T83 WHERE (T83.C1207712002 = '0200086451') ORDER BY 1 ASC
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        4      0.01       0.01         33         40          0          38
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.01       0.01         33         40          0          38
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 31
Rows     Row Source Operation
-------  ---------------------------------------------------
     38  SORT ORDER BY
     38  TABLE ACCESS BY GLOBAL INDEX ROWID T83 PARTITION: ROW LOCATION ROW LOCATION
     39  INDEX RANGE SCAN (object id 26885)
##########Get All Fields for 1 Ticket#########
SELECT C1,C2,C3,C4,C6,C7,C8,0,C536870917,C536870918,C536870919,C536870920,C536870921, C536870922,C536870923,C536870924,C536870925,C536870926,C536870927,C536870928, C536870929,C536870930,C536870932,C536870933,C536870934,C536870935,C536870936,C536870937, C536870938,C536870939,C536870942,C536870943,C536870944,C536870945,C536870948,C536870949, C536870950,C536870951,C536870952,C536870953,C536870955,C536870956,C536870957,C536870958, C536870960,C536870961,C536870963,C536870966,C536870967,C536870968,C536870969,C536870970, C536870971,C536870972,C536870973,C536870974,C536870975,C536870976,C536870980,C536870981, TO_CHAR(C536870982,'FM99999999999999999999999990.009'),C536870983,C536870984,C536870985,C536870986,C536870987,C536870988,C536870989,C536870990,C536870991,C536870992,C536870993, C536870994,C536870995,C536870996,C536870997,C536870998,C536871001,C536871002,C600000001,
C1207710024,C1207711001,C1207711002,C1207711003,C1207711004,C1207711005,C1207711006,C1207711010, C1207711011,C1207711012,C1207711013,C1207711014,C1207711015,C1207711016,C1207711017,C1207711018, C1207711020,C1207711021,C1207711022,C1207711023,C1207711024,C1207711028,C1207711029,C1207711030, C1207711031,C1207712001,C1207712002,C1207729901,C1207729902,C1207729903,C1207741001, TO_CHAR(C1207751001,'FM99999999999999999999999990.009'),C1207761020,C1207769900,C5,C536870931,C536870947
FROM T83
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.10       0.08          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          3          5          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.10       0.08          3          5          0           1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 31
Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY GLOBAL INDEX ROWID T83 PARTITION: ROW LOCATION ROW LOCATION
      1  INDEX UNIQUE SCAN (object id 23615)
Also, out of curiosity, I ran the same test but against all 8 CPU's rather than a dedicated 2. What is of interest is that the total elapsed time was greater when using all 8 rather than 2:
##### 2 CPU's ######
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       12      0.37       0.27          2        222          0           0
Execute     13      0.00       0.02          0          0          0           0
Fetch       23      0.01       0.01         82        101          0          72
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       48      0.38       0.30         84        323          0          72
##### 8 CPU's #######
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       12      0.31       0.39          2        222          0           0
Execute     13      0.00       0.00          0          0          0           0
Fetch       23      0.05       0.56         82        101          0          72
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       48      0.36       0.95         84        323          0          72

Received on Wed Dec 31 2003 - 13:54:09 CST