

Re: RAC is slower than non-RAC for batch job with lots of update/delete?

From: Charles Schultz <>
Date: Thu, 18 Oct 2007 08:45:51 -0500
Message-ID: <>

Yes, pp. 236-239 really spell that out quite well, thanks. However, I am still unclear on whether you can explicitly remaster a resource. From the verbiage, it sounds like the answer is no. Is it a feasible workaround to dynamically reset _gc_affinity_limit to some low value (say, 1) at the beginning of the batch job, then reset it afterwards? Unfortunately, this would also cause havoc if anything else is running on both nodes. I see that the affinity time and _lm_dynamic_remastering are both static parameters, so it would be hard to play with them without bouncing the database.
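For concreteness, the workaround I have in mind would look something like the sketch below. This is purely hypothetical: _gc_affinity_limit is an undocumented underscore parameter, changing it should only be done under Oracle Support's direction, and whether it accepts SCOPE=MEMORY (and what its default is) on a given version is an assumption on my part.

```sql
-- Before the batch job: lower the remastering threshold so blocks the
-- batch touches get remastered to this instance quickly.
-- ASSUMPTION: the parameter is dynamically settable with SCOPE=MEMORY;
-- verify with Oracle Support before trying this anywhere that matters.
ALTER SYSTEM SET "_gc_affinity_limit" = 1 SCOPE = MEMORY SID = 'rac1';

-- ... run the batch job connected to instance rac1 ...

-- Afterwards: restore the prior value (50 is the commonly cited 10g
-- default, but check your own instance's setting first).
ALTER SYSTEM SET "_gc_affinity_limit" = 50 SCOPE = MEMORY SID = 'rac1';
```

The instance name 'rac1' is a placeholder; the SID clause just scopes the change to the one instance the batch job connects to.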

Aside from application partitioning, what is the best way to handle "sequential" jobs?

On 10/18/07, K Gopalakrishnan <> wrote:
> Chuck,
> > I have heard that the Oracle kernel will keep track of how many times
> > a gc request is made, and upon hitting a certain threshold, will
> > remaster a particular block (or table?) to the node making a majority
> > of the requests. Which makes me wonder. If your batch job is always
> > connecting to a specific node, could the blocks be mastered explicitly
> > to that node? I ask that of the list.
> Yes. The behavior is called Dynamic Resource Mastering. If one object
> (or set of objects) is continuously accessed on one particular node,
> the objects are mastered on that node. Check Chapter 11 (Global
> Resource Directory) of the 10g RAC Handbook.
> There used to be some issues in the past (related to node evictions
> during DRM), and we turn off the DRM if you are <
> -Gopal
> --
> Best Regards,
> K Gopalakrishnan
> Co-Author: Oracle Wait Interface, Oracle Press 2004
> Author: Oracle Database 10g RAC Handbook, Oracle Press 2006
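As a side note, before touching any parameters it may be worth checking whether DRM has already remastered the batch job's objects to the node it connects to. A minimal sketch against the 10g view V$GCSPFMASTER_INFO (the schema name below is a placeholder; CURRENT_MASTER is, as I understand it, a 0-based instance number):

```sql
-- Which instance currently masters the objects the batch job touches?
-- V$GCSPFMASTER_INFO exposes object-affinity (DRM) decisions in 10g.
SELECT o.owner,
       o.object_name,
       m.current_master,    -- 0-based instance number of the current master
       m.previous_master,
       m.remaster_cnt       -- how many times this object has been remastered
  FROM v$gcspfmaster_info m
  JOIN dba_objects o
    ON o.data_object_id = m.data_object_id
 WHERE o.owner = 'BATCH_SCHEMA';   -- hypothetical schema name
```

If REMASTER_CNT is climbing and CURRENT_MASTER matches the batch node, DRM is already doing what we want and the parameter games may be unnecessary.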

Charles Schultz

