
Re: DB File Sequential Read Waits

From: rjpfal <ralbertson_at_comcast.net>
Date: 31 Dec 2003 06:52:16 -0800
Message-ID: <2932a99f.0312310652.697f1677@posting.google.com>


"Jonathan Lewis" <jonathan_at_jlcomp.demon.co.uk> wrote in message news:<bssci4$qro$1$830fa78d_at_news.demon.co.uk>...

Jonathan, thank you for the info. I will do some more research on disk layout. I have been reviewing the SQL explain plans, and they all look reasonably well tuned, with good use of indexes.

Somehow I have a bottleneck on the backend array. The admins did give me a breakdown of the hypervolumes (8G each) and their distribution across the physical devices. At most, 2 hypervolumes sit on any one disk.

Here is the distribution, using letters for the physical drives, with the count of hypervolumes per drive:

A - 2
B - 2
C - 1
D - 2
E - 1
F - 2
G - 1
H - 1
I - 2
J - 1
K - 2
L - 1
M - 2
N - 1
O - 1
P - 1
Q - 2
R - 1
S - 1
T - 1
U - 1
V - 1

Total hypervolumes used: 30, for 240G of space.

With a 1M stripe, the data has to be distributed across multiple physical drives.
The data volume is made up of 6 hypervolumes, and the indexes are using 8 hypervolumes.
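
To double-check from inside the database that the reads really are being spread out, something like this against v$filestat/v$datafile shows per-file read counts and average read times (only a sketch; timed_statistics must be enabled for readtim to be populated):

    select df.name,
           fs.phyrds,                          -- physical reads per file
           fs.readtim,                         -- read time, in centiseconds
           round(fs.readtim * 10
                 / greatest(fs.phyrds, 1), 1) avg_read_ms
    from   v$filestat fs, v$datafile df
    where  fs.file# = df.file#
    order  by fs.phyrds desc;

A file whose avg_read_ms is far above the rest points at the device (or hypervolume) to look at first.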

Thanks again for the information.
> Ignore the minor details about control files,
> log files and parsing for the moment.
>
> In your one hour snapshot you've averaged
> 213 reads per second at an average wait
> time of 39 milliseconds, for a total of 8 hours
> of wait time (do you happen to have around
> 8 concurrent streams of queries, or far more?)
> whilst using 18% of a CPU. The only problem
> worth addressing is your I/O problem. You have
> to reduce the total number of read requests, and
> find out if there is anything (other than your
> own disk layout and concurrency) that is
> causing the device bottleneck.
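
(Spelling out that arithmetic: 213 reads/sec x 0.039 sec/read x 3,600 sec comes to roughly 29,900 seconds, i.e. about 8.3 hours of wait accumulated in the one-hour window, which is why about 8 concurrent query streams would account for it.)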
>
> Question 1 HAS to be:
> How many different physical devices are
> you hitting when you query your indexes
> and tables? Make sure that you really
> have the striping that you think you have -
> maybe you have one disc for data, one
> for indexes, one for temp ... maybe your
> logical names have become physical devices!
> If you have N highly-concurrent users in this
> test, and it's a realistic test, I would aim to
> have N devices being hit simultaneously.
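
One way to make that check (a sketch, using the /u02 and /u03 mount points from the earlier post): list where the datafiles actually live, then watch the devices while the test runs; on Solaris, iostat -xn 5 reports per-device service times and busy percentages.

    select file#,
           name,                               -- full path shows the mount point
           round(bytes / 1024 / 1024) size_mb
    from   v$datafile
    order  by name;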
>
> Question 2:
> Given that you are hitting a very large partitioned
> table:
> a) Are you sure your test queries are realistic,
> or might you be hitting 'old' partitions very regularly
> that would not be hit very often in a production system?
>
> b) Are your queries supposed to achieve partition
> elimination, and are they achieving it? Failure to
> eliminate could be driving up the workload by a
> factor of N, where N is the number of partitions
> in the table.
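
For anyone wanting to verify elimination, a minimal sketch with explain plan (the table name and date predicate here are made up for illustration):

    explain plan for
      select *
      from   big_monthly_table                -- hypothetical partitioned table
      where  period_dt >= to_date('01-12-2003', 'DD-MM-YYYY')
      and    period_dt <  to_date('01-01-2004', 'DD-MM-YYYY');

    select id, operation, options, partition_start, partition_stop
    from   plan_table
    order  by id;

A PARTITION RANGE SINGLE step (or matching partition_start/partition_stop values) means elimination worked; a start of 1 and a stop equal to the partition count means every partition is being scanned.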
>
> c) Are your access paths into the partitioned tables
> as efficient as they could be? Are the indexes the
> correct ones, have they been padded with garbage,
> or are they not precise enough? Scrape v$sql for a
> sample set of queries, and check how many rows
> are acquired from the table for each row returned
> to the end-user.
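
A starting point for that scrape might look like this (the ordering is arbitrary; adjust to taste):

    select sql_text,
           executions,
           disk_reads,
           rows_processed,
           round(disk_reads / greatest(executions, 1))     reads_per_exec,
           round(rows_processed / greatest(executions, 1)) rows_per_exec
    from   v$sql
    where  executions > 0
    order  by disk_reads desc;

Statements where reads_per_exec is far higher than rows_per_exec are the ones worth pulling plans for.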
>
> You appear to be doing roughly one physical read for
> each row fetched by rowid, so I doubt if you are
> going to get much benefit from playing around with
> the KEEP and RECYCLE buffer pools.
>
>
> --
> Regards
>
> Jonathan Lewis
> http://www.jlcomp.demon.co.uk
>
> The educated person is not the person
> who can answer the questions, but the
> person who can question the answers -- T. Schick Jr
>
>
> One-day tutorials:
> http://www.jlcomp.demon.co.uk/tutorial.html
>
>
> Three-day seminar:
> see http://www.jlcomp.demon.co.uk/seminar.html
> UK, November
>
>
> The Co-operative Oracle Users' FAQ
> http://www.jlcomp.demon.co.uk/faq/ind_faq.html
>
>
> "rjpfal" <ralbertson_at_comcast.net> wrote in message
> news:2932a99f.0312290945.6fc14a6f_at_posting.google.com...
> > Anyone have suggestions on how to reduce the number of db file
> > sequential reads?
> > Also, any suggestions on improving IO on an EMC array (concatenated
> > versus striped volumes)?
> >
> > I have a CRM application running on Oracle 8.1.6.3 Enterprise Edition.
> > It is in archive log mode and is running on a Sun Solaris 2.8 machine
> > with 2 dedicated CPUs.
> >
> > I am in the process of testing a new disk configuration on the backend
> > of this server (EMC). I have the data and index files assigned to their
> > respective file systems, /u02 and /u03. The breakdown of the physical
> > disks on the array is done in 8G chunks on 73G hard drives. The stripe
> > size across the disks is 1M.
> >
> > The test I am performing is strictly a read-only set against the DB,
> > using the most often used queries from the production database. A similar
> > test was run against the production DB during off hours.
> >
> > I am seeing no real improvement in the total IO time between the
> > production system test (concatenated volumes) and the test server
> > (striped volumes), both in the amount of IO and the actual times.
> >
> > The application is totally random in its access to the data. There are
> > not really any hot blocks other than those used by a small set of
> > lookup tables.
> >
> > For the most part, index scans are being used. Note that
> > optimizer_index_cost_adj is set at 5. Also, db_file_direct_io_count is
> > at 16. Do these not push the CBO in opposing directions, i.e. a less
> > costly index versus a full table scan?
> >
> > There is primarily one table that is accessed frequently, and this table
> > is partitioned by month. IO seems to be evenly
> > distributed.
> >
Received on Wed Dec 31 2003 - 08:52:16 CST
