Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US

Re: switching to 'direct path read' should help prevent thrashing buffer cache ?

From: <p.santos000_at_gmail.com>
Date: 21 Dec 2006 09:13:01 -0800
Message-ID: <1166721181.717746.3140@48g2000cwx.googlegroups.com>

joel garry wrote:
> p.santos000_at_gmail.com wrote:
> > Folks,
> > Our environment is neither an OLTP nor a DSS system. We have
> > transactions that cross boundaries.
> >
> > In particular, we have one-off jobs that many of our customers run
> > at any time during the day, in which they choose to clear a table
> > column. Most clients clear anywhere from 1 million to 11 million
> > rows per job.
> >
> > The pseudo-code and SQL look like this:
> >
> >   loop
> >     UPDATE DEMO_<CLIENT_ID>
> >        SET col1 = NULL,
> >            col2 = NULL,
> >            col3 = NULL
> >      WHERE ( col1 IS NOT NULL OR
> >              col2 IS NOT NULL OR
> >              col3 IS NOT NULL )
> >        AND ROWNUM <= 500000;
> >     commit;
> >   until SQL%ROWCOUNT < 500000
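The batch loop above can be written as a single PL/SQL block. This is a minimal sketch, assuming a concrete table name (`demo_client` stands in for `DEMO_<CLIENT_ID>`) and the poster's 500,000-row batch size; note that `SQL%ROWCOUNT` must be captured before the COMMIT, which resets it:

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  LOOP
    UPDATE demo_client
       SET col1 = NULL, col2 = NULL, col3 = NULL
     WHERE (col1 IS NOT NULL OR col2 IS NOT NULL OR col3 IS NOT NULL)
       AND ROWNUM <= 500000;
    n := SQL%ROWCOUNT;     -- capture before COMMIT resets the attribute
    COMMIT;                -- release row locks between batches
    EXIT WHEN n < 500000;  -- last batch was partial: nothing left to clear
  END LOOP;
END;
/
```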
> >
> > We use a ROWNUM limit in order to prevent row blocking for other
> > processes that might be running single-row DML against the table.
> >
> > We have increased the efficiency of these processes by making I/O
> > faster, and now customers are just doing it more often ... which
> > obviously thrashes my buffer cache.

>
> Please state explicitly how you are deciding this obvious thing.
>
> >
> > Nearly all updates spend most of their time waiting on 'db file
> > scattered read'.
> > We have db_file_multiblock_read_count = 128.
>
> See the discussion on oracle-l about this (
> http://www.freelists.org/archives/oracle-l/12-2006/msg00294.html ) -
> state your platform and versions. Also see metalink Note:232443.1 and
> http://jonathanlewis.wordpress.com/2006/11/27/event-10132/
>
> >
> > I should also mention that this process connects via a shared/MTS
> > connection ... although we can change that if needed.
> >
> > I'm thinking about having just this process run in parallel in order
> > to bypass the buffer cache, because I don't believe this process
> > benefits from caching, and it causes blocks to age out faster for
> > other clients that are doing other things .. and do benefit from
> > caching.
> >
> > My thought is that if I switch this to a dedicated connection and
> > add a PARALLEL hint (even if it's just 2 parallel servers per job),
> > the job will complete faster and it will prevent my cache from being
> > thrashed, at the cost of only some more PGA memory and a little bit
> > more I/O.
> >
> > What are the cons of doing something like this?
>
> Well, only testing will tell, but sometimes you wind up spending more
> resources on putting the parallel slaves' results back together than
> if you hadn't used parallel at all. http://www.oracledoug.com/papers.html
> has a lot of interesting parallel discussion.
>
> Since you are worried about buffer thrashing, check out using keep and
> recycle buffer pools. See details in the performance tuning manual.
> (And more details on Jonathan Lewis' site).
>
> If you are not using dedicated connections to begin with, you can't
> complain about performance. Shared servers are for scalability in the
> sense of large numbers of users; the docs are explicit that they are
> not for performance.
>
> jg
> --
> @home.com is bogus.
> http://www.securityfocus.com/archive/1/454965

My apologies for not showing the version/environment information.

I'm on Solaris 5.8 (Veritas filesystem) running 10.1.0.4. Our buffer cache is 3GB, and most of the tables being updated are anywhere from 1GB to 10GB in size. The columns are sometimes collocated, and the updated columns are always VARCHAR2(1). They are usually set to '1' or '0', and when they are cleared they are set to NULL.

The reason I don't use the KEEP / RECYCLE pools is that I would have to do this for every customer table. The fact that our application allows a customer to clear an entire column during business hours, while possibly blocking any number of single-record transactions, should show you that this is not an ideal system .. but I'm just a DBA :-(.

Also, these updates run from a Perl client using DBD::Oracle, so array processing or using BULK COLLECT is not possible, and I'm sure it would not be faster anyway.

My thought was to configure the larger customers to use dedicated connections and set the update to execute in parallel. When I've traced this (10046), the biggest wait event is 'direct path read' instead of 'db file scattered read'.
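The parallel approach described above would be along these lines (a sketch only; table name and degree 2 are assumptions, and parallel DML requires a dedicated server connection). One con worth noting: a parallel UPDATE takes an exclusive table lock for the duration of the transaction, so it would block exactly the concurrent single-row DML that the ROWNUM batching was protecting:

```sql
-- parallel DML is off by default and must be enabled per session
ALTER SESSION ENABLE PARALLEL DML;

-- degree 2 as discussed; the hint alias must match the table alias
UPDATE /*+ PARALLEL(t, 2) */ demo_client t
   SET col1 = NULL, col2 = NULL, col3 = NULL
 WHERE (col1 IS NOT NULL OR col2 IS NOT NULL OR col3 IS NOT NULL);

COMMIT;  -- required before the session can query the table again
```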

I also agree that on small customer tables, the parallelization would be overhead.
My pga_aggregate_target is set to 1500M.

thanks for the feedback.
-peter

Received on Thu Dec 21 2006 - 11:13:01 CST
