Oracle-L Mailing List

Re: Subject: RE: Tuning issue, 10046 trace and Scheduled Jobs

From: Jonathan Lewis <>
Date: Tue, 21 Nov 2006 20:48:11 -0000
Message-ID: <018101c70dae$5bbb2350$0300a8c0@Primary>

There is probably a change in execution plan, due to the switch from pga_aggregate_target to manual workareas.

When you generated the plans, did you try setting workarea_size_policy, sort_area_size and hash_area_size to the batch values before using explain plan, to see if things changed?
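Something along these lines in the session before running the explain plan (the sizes here are placeholders - use whatever the batch job actually sets):

    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 104857600;   -- placeholder: 100MB
    alter session set hash_area_size = 104857600;   -- placeholder: 100MB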

Do you populate the temporary tables with realistic data before running the explain plan? (Which might make a difference if you are on 10g, or on 9i with a non-default value for dynamic sampling.)
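For instance (gtt_orders is a hypothetical global temporary table - the point is to load it with representative volumes in the same session before explaining, so dynamic sampling sees real data):

    insert into gtt_orders
    select rownum, mod(rownum, 100)
    from   dual
    connect by level <= 50000;

    explain plan for
    select count(*) from gtt_orders where col2 = 42;

    select * from table(dbms_xplan.display);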

The extreme number of tablescans suggests a problem either with a non-mergeable view, or with a nested loop join where the second (inner) table is being scanned. Either problem could occur if the optimizer had produced an estimated cardinality of one row on a critical table - therefore allowing a 'for each row' approach to the next table to be a full scan.

A common example of this problem is a NOT IN or NOT EXISTS clause that Oracle turns into an anti-join - using a nested loop anti-join when the human eye sees the obvious need for a hash anti-join.
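For illustration (parts and stock are hypothetical tables here), the hash_aj hint in the subquery is one way to ask for the hash anti-join - and remember that a NOT IN can only become an anti-join at all if the subquery column is known to be not null:

    select p.part_no
    from   parts p
    where  p.part_no not in (
             select /*+ hash_aj */ s.part_no
             from   stock s
             where  s.part_no is not null
           );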

Given you can get at v$mystat etc. at the end of the batch process - could you also get at v$sql_plan for the statement? Put a tag into it to make it easy to find.

select /*+ this_is_a_tag */ .....

select hash_value, child_number from v$sql where sql_text like '%this_is_a_tag%';

select ... from v$sql_plan
where hash_value = ... and child_number = ...

Then you can find what actually happened.
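If you are on 10g you could also let dbms_xplan do the formatting, with sql_id and child_number coming from v$sql as above:

    select sql_id, child_number
    from   v$sql
    where  sql_text like '%this_is_a_tag%';

    select *
    from   table(dbms_xplan.display_cursor('&sql_id', &child_number));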


Jonathan Lewis

Author: Cost Based Oracle: Fundamentals

The Co-operative Oracle Users' FAQ

> Taking your suggestion, I trapped the v$mystat data and compared them
> between the runs. It seems when called by the scheduler it does 30,000 times
> more full scans (from 11 to 316,889) of the global temporary tables. Below is
> an extract of every stat with a 200% or larger increase between runs. Any
> idea what could cause this? I also noticed the trace file didn't show an
> explain for this insert-as-select even though it did for others; any way
> to force it to show it?
> Thanks much,
> Ken
> number of map operations: 73,554 -> 33,125,668 (diff 33,052,114, percent_diff 449)

Received on Tue Nov 21 2006 - 14:48:11 CST
