Oracle FAQ Your Portal to the Oracle Knowledge Grid
 


Re: Statspack report for you to look at

From: Charles Hooper <hooperc2000_at_yahoo.com>
Date: 22 Jan 2007 04:09:47 -0800
Message-ID: <1169467787.907073.36930@m58g2000cwm.googlegroups.com>


Martin T. wrote:
> Charles Hooper wrote:
> > I looked through the Statspack report and made a couple of
> > observations. People in this group who work with Statspack reports
> > more frequently than I do will likely make different observations.
> >
> > One section of the report, titled "Buffer Pool Advisory for DB",
> > provides estimates for different buffer cache sizes. You currently
> > have 3,000 buffers allocated (3,000 * 8,192 ~ 25MB). If you were to
> > increase the buffer cache size to 10,000 buffers (roughly 83MB), you
> > would only decrease the number of reads by an estimated 6%. This
> > implies that increasing the buffer cache size may not be that helpful.
> >
> > It appears that the system is CPU bound, and not necessarily IO bound.
> > Note the number of logical reads; this may be a sign of highly
> > inefficient SQL statements. Also note that there is an average of
> > 170.8 round trips between the client and server per second, which may
> > indicate that the client is not making efficient multi-row requests to
> > the database. There are an average of 273.4 execute (SELECT, INSERT,
> > UPDATE, COMMIT, CONNECT, etc.) requests made by the clients per second.
> >
> > The "buffer is not pinned count" per second count is very high, and
> > that seems to conflict with the advice provided under the heading
> > "Buffer Pool Advisory for DB".
> >
> > "table fetch by rowid" (fetch by index) is 12,156.4 per second, while
> > "table scan rows gotten" is 78,663.2 per second. This seems to imply
> > that 84.5% of the 112,173,648 table rows read, were read by performing
> > full table scans. Your system seems to prefer full tablescans,
> > possibly due to lack of useful indexes with accurate statistics, too
> > high of a value for db_file_multiblock_read_count (you specified 16),
> > or using functions on indexed columns, or something else.
> >
> > It appears that there are an average of 11.4 commits per second, which
> > may be high.
> >
> > Is the whole system slow, or just one process in the system? If it is
> > just one process, isolate the statistics for one session using a 10046
> > trace at level 8 or 12. Take a look at the application code, if
> > available, to determine if SQL statements can be re-written into more
> > efficient forms.
> >
> > Charles Hooper
> > PC Support Specialist
> > K&M Machine-Fabricating, Inc.
>
> Thanks a lot for this input!
> I'll be looking into several issues over the next few days and
> hopefully I'll arrive at a solution.
>
> One thing though: we have 3 or 4 tables with only about 4-10 records
> each, in which online datasets are buffered and (very) frequently
> updated and read. Some of them do not have indexes on the lookup
> columns, and I would guess that is where the high value of "table scan
> rows gotten" comes from. I'll double-check this and see whether I can
> get better performance by throwing some indexes at them -- but does it
> make sense to have an index on a table so small?
>
> thanks again,
> br,
> Martin
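
As a quick sanity check on the arithmetic quoted above, the buffer cache sizes and the full-scan share can be recomputed from the raw statistics. This is a minimal sketch: the report's 84.5% figure was presumably derived from exact interval totals, while the per-second rates quoted here give a slightly different share.

```python
# Sanity-check the arithmetic quoted from the Statspack report.
BLOCK_SIZE = 8192  # bytes; the instance block size

def cache_size_mb(buffers, block_size=BLOCK_SIZE):
    """Approximate buffer cache size in (decimal) megabytes."""
    return buffers * block_size / 1_000_000

print(cache_size_mb(3_000))   # ~24.6 MB: the "~25MB" current cache
print(cache_size_mb(10_000))  # ~81.9 MB: the "roughly 83MB" advisory target

# Share of rows read via full table scans, from the per-second rates.
rowid_rows_per_sec = 12_156.4  # "table fetch by rowid"
scan_rows_per_sec = 78_663.2   # "table scan rows gotten"
scan_share = scan_rows_per_sec / (scan_rows_per_sec + rowid_rows_per_sec)
print(f"{scan_share:.1%}")  # ~86.6% from the rates alone; the report's
                            # 84.5% likely reflects the interval totals
```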

You have told Oracle that it is able to read 16 blocks in a single IO, which applies to full table scans and full scans of indexes. Given your 8KB block size, you have told Oracle that it can read 128KB in a single IO request. Each fetch by index lookup requires at least 3 IOs and retrieves only one row at a time, while a full table scan retrieves as many rows as fit in the 16 blocks (very likely all of the rows in each of the small tables). Thus, forcing Oracle to use an index on a small table can actually hurt performance.
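
The IO comparison above can be sketched as a toy model. This is an illustration only; real costs depend on the index height, block caching, and where the table's high-water mark sits.

```python
import math

MBRC = 16  # db_file_multiblock_read_count, as set on this system

def index_lookup_ios(rows_fetched, index_height=3):
    # Each single-row fetch by index walks root -> leaf -> table block:
    # at least 3 IOs per row, as described above.
    return rows_fetched * index_height

def full_scan_ios(table_blocks, mbrc=MBRC):
    # A full scan reads up to `mbrc` blocks per multiblock IO request.
    return math.ceil(table_blocks / mbrc)

# A 4-10 row lookup table typically occupies a single block below the
# high-water mark, so one multiblock read returns every row in it.
print(full_scan_ios(1))     # 1 IO for the whole table
print(index_lookup_ios(1))  # 3 IOs just to fetch a single row via the index
```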

If most of the 112,173,648 table rows read come from the 3 or 4 tables with 4-10 rows each (at most 40 rows across all of those tables), that may be a sign of a significant problem. I noticed that most of the time was spent in recursive calls - triggers, PL/SQL procedures, space management, etc. It could be that the system is too modularized, forcing the same set of SQL statements to be repeatedly thrown at the database server rather than caching the results of each SQL statement.
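
One mitigation for the repeated-execution pattern described above is to cache the result of a frequently re-issued lookup on the client side. A minimal sketch follows; `run_query` and its return value are hypothetical stand-ins for whatever call the application actually makes to the database.

```python
from functools import lru_cache

executions = 0  # counts round trips actually sent to the database

def run_query(sql, key):
    # Hypothetical stand-in for a real database call.
    global executions
    executions += 1
    return f"row for {key}"

@lru_cache(maxsize=128)
def cached_lookup(key):
    # Identical lookups are served from the cache instead of re-running
    # the same SQL statement against the server each time.
    return run_query("SELECT ... FROM small_table WHERE id = :1", key)

for _ in range(1000):
    cached_lookup(42)  # only the first call reaches the database
print(executions)      # 1
```

A client-side cache is only safe if the cached rows change rarely or staleness is acceptable; since the report suggests these small tables are frequently updated, any such cache would need a short expiry.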

Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.

Received on Mon Jan 22 2007 - 06:09:47 CST

