
Re: switching to 'direct path read' should help prevent thrashing buffer cache ?

From: <p.santos000_at_gmail.com>
Date: 21 Dec 2006 09:21:12 -0800
Message-ID: <1166721672.265421.16120@79g2000cws.googlegroups.com>

Charles Hooper wrote:
> p.santos000_at_gmail.com wrote:
> > Folks,
> > Our environment is neither an OLTP nor a DSS system. We have
> > transactions that cross both boundaries.
> >
> > In particular, we have these one-off jobs that many of our customers
> > run at any time during the day, in which they choose to clear a
> > table column. Most clients clear anywhere from 1 million to
> > 11 million rows per job.
> >
> > The pseudo-code and SQL look like this (a runnable sketch follows
> > below, after the quoted post):
> >
> > while SQL%ROWCOUNT < 500,000
> > do
> >     UPDATE DEMO_<CLIENT_ID>
> >        SET col1 = null,
> >            col2 = null,
> >            col3 = null
> >      WHERE ( col1 is not null OR
> >              col2 is not null OR
> >              col3 is not null ) AND ROWNUM <= 500,000;
> >     commit;
> > done.
> >
> > We use a ROWNUM limit in order to prevent row blocking for other
> > processes that might be running single-row DML against the table.
> >
> > We have increased the efficiency of these processes, making IO
> > faster, and now customers are just running them more often ... which
> > obviously thrashes my buffer cache.
> >
> > Nearly all updates spend most of their time waiting on 'db file
> > scattered read'.
> > We have db_file_multiblock_read_count = 128.
> >
> > I should also mention that this process connects via a shared/MTS
> > connection, although we can change that if needed.
> >
> > I'm thinking about having just this process run in parallel in order
> > to bypass the buffer cache, because I don't believe this process
> > benefits from caching, and it causes blocks to age out faster for
> > other clients that are doing other things ... and do benefit from
> > caching.
> >
> > My thought is that if I switch this to a dedicated connection and
> > add a PARALLEL hint (even if it's just 2 parallel servers per job),
> > the job will complete faster and my cache will no longer be
> > thrashed, at the cost of only a little more PGA memory and IO.
> >
> > I'm looking for the cons of doing something like this.
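
A minimal PL/SQL sketch of the batched update described above (the
table name stands in for DEMO_<CLIENT_ID>, the 500,000 batch size is
taken from the pseudo-code, and the exit test assumes the loop should
stop after the first partial batch):

    DECLARE
        l_rows PLS_INTEGER;
    BEGIN
        LOOP
            UPDATE demo_client1
               SET col1 = NULL, col2 = NULL, col3 = NULL
             WHERE ( col1 IS NOT NULL
                  OR col2 IS NOT NULL
                  OR col3 IS NOT NULL )
               AND ROWNUM <= 500000;
            l_rows := SQL%ROWCOUNT;  -- capture before COMMIT resets it
            COMMIT;                  -- release row locks between batches
            EXIT WHEN l_rows < 500000;  -- last (partial) batch done
        END LOOP;
    END;
    /

A parallel-DML variant of the same UPDATE would additionally need a
dedicated session and ALTER SESSION ENABLE PARALLEL DML, plus the
PARALLEL hint on the statement.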

>

> I suspect that Joel Garry's comments are right on the mark. There is
> little doubt that the UPDATE statement is performing a full tablescan,
> reading 128 blocks at a time. The question is, what is the maximum IO
> size for the operating system that you are using? Is the operating
> system actually breaking the request for 128 blocks into 16 requests
> for 8 blocks each? The links that Joel posted seem to indicate that
> using a large db_file_multiblock_read_count may decrease performance.
> What if Oracle only needed the contents of seven blocks? Is the value
> of db_file_multiblock_read_count forcing the operating system to
> perform 15 more read requests than required?
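
One low-risk way to test this, assuming the job can be run from its
own session, is to lower the parameter for that session only and
compare the traces:

    -- Try a smaller multiblock read count for this session only,
    -- then re-run the UPDATE and compare elapsed time and waits.
    ALTER SESSION SET db_file_multiblock_read_count = 16;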
>

> http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm
> "Setting the value for DB_FILE_MULTIBLOCK_READ_COUNT initialization
> parameter too high which favors full scans..."
>

> http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/optimops.htm
> "A larger multiblock read count is likely to decrease the cost for a
> sort merge join in relation to a nested loop join. If a large number of
> sequential blocks can be read from disk in a single I/O, then an index
> on the inner table for the nested loop join is less likely to improve
> performance over a full table scan."
>

> Full table scans of large tables by default place the read blocks at
> the least recently used end of the LRU list in the buffer cache. A
> NOCACHE hint can be added to the SQL statement to help enforce this
> rule for smaller tables.
> http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements006.htm#BABEIBIF
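
For example (the table name and alias here are placeholders), either
at the table level or per statement:

    -- Table attribute: full-scan blocks go to the least recently
    -- used end of the LRU list.
    ALTER TABLE demo_client1 NOCACHE;

    -- Or per query, via the hint:
    SELECT /*+ NOCACHE(t) */ COUNT(*) FROM demo_client1 t;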
>

> A quick calculation using artificial numbers. Assume that the table
> has 11,000,000 rows, with an average row length of 200 bytes.
> Assuming that you have an 8KB block size, and that each block is
> roughly 80% used (8192 * 0.80 = 6554; 6554 / 200 ~ 32), roughly 32
> rows may be present in each block. 11,000,000 / 32 ~ 343,750 blocks,
> and 343,750 * 8192 = 2,816,000,000 bytes ~ 2.8GB of table contents
> to be read to process this UPDATE statement, assuming that the high
> water mark does not extend significantly higher than this number of
> blocks. These are of course very rough numbers. Shared/MTS is
> designed to handle very short transactions, which likely excludes it
> from reading 2.8GB of table data, and possibly updating any indexes
> on the three columns.
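
The same rough estimate can be pulled from the data dictionary once
statistics are current (the table name is a placeholder and the 8KB
block size is assumed):

    SELECT num_rows,
           blocks,
           ROUND(blocks * 8192 / 1024 / 1024 / 1024, 1) AS approx_gb
      FROM user_tables
     WHERE table_name = 'DEMO_CLIENT1';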
>

> Have you performed a 10046 trace on this type of update to determine
> what Oracle may be doing? Other members of this group will likely
> provide additional suggestions if you are able to provide additional
> details.
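
A common way to capture one, assuming the job can be run from a
dedicated session (level 8 includes wait events, and the trace file
appears in user_dump_dest):

    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the UPDATE job ...
    ALTER SESSION SET events '10046 trace name context off';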
>

> Charles Hooper
> PC Support Specialist
> K&M Machine-Fabricating, Inc.

We want our system to favor full table scans because we have customer queries that can select any column with any predicate. Only batch lookups and a few other things use indexes to look up small pieces of these very large tables.

This access is faster than ever. And although 128 is the value that is set, 128 blocks are not always read. I've traced (10046) these processes many times, and in some cases the 'db file scattered read' waits read fewer blocks: the p3 value is usually 128, but sometimes it is less. I think the block size of the filesystem is 512 bytes and our database block size is 8K.
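
For what it's worth, a sketch of how to watch this live (for 'db file
scattered read', p1 is the file#, p2 the starting block#, and p3 the
number of blocks requested):

    SELECT sid, event, p1 AS file#, p2 AS block#, p3 AS blocks
      FROM v$session_wait
     WHERE event = 'db file scattered read';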

Thanks for the feedback.
-peter

Received on Thu Dec 21 2006 - 11:21:12 CST
