Re: PeopleSoft query
Date: Tue, 14 Jul 2009 03:09:07 +0800
Thanks for the idea on analytic functions. I agree with you (also on the index scan for the subqueries): the less work the database does, the faster it gets.
I'll explore that and discuss with the developer how it will affect the functionality and any dependencies.
Yes, we've already set the two recommended underscore parameters, including _UNNEST_SUBQUERY = FALSE. We are also running with the 'update stats' option.
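For reference, here's a sketch of how the parameter can be set (hidden parameters start with an underscore, so they must be double-quoted; run as SYSDBA):

```sql
-- Sketch only: set the recommended underscore parameter as SYSDBA.
-- Use SCOPE = SPFILE instead if the instance rejects a dynamic change.
ALTER SYSTEM SET "_unnest_subquery" = FALSE SCOPE = BOTH;
```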
Thanks for the tip on the 'stored statement' for GPPDPRUN. I noticed that the other environment (HCPRD4), where we do our payroll test runs, runs faster than prod (HCPRD), and the only difference is the join order.
Here's the most recent plan from the prod environment (HCPRD). This one is slower and runs for about 18 hours:
SORT AGGREGATE (the entire aggregate row source)
  HASH JOIN (29281638 rows)
    TABLE ACCESS FULL PS_GP_PYE_SEG_STAT (5935691 rows)
    HASH JOIN (29681836 rows)
      TABLE ACCESS FULL PS_GP_PYE_STAT_WRK (22880 rows)
      TABLE ACCESS FULL PS_GP_PYE_PRC_STAT (5897449 rows)
Here's the plan from HCPRD4, which is much faster than HCPRD: it completes in less than half the runtime.
SORT AGGREGATE (the entire aggregate row source)
  HASH JOIN (29245093 rows)
    TABLE ACCESS FULL PS_GP_PYE_STAT_WRK (23770 rows)
    HASH JOIN (5803496 rows)
      TABLE ACCESS FULL PS_GP_PYE_SEG_STAT (5904839 rows)
      TABLE ACCESS FULL PS_GP_PYE_PRC_STAT (5914296 rows)
My observation here is that the hash join in HCPRD4 is much faster because the small row source output of PS_GP_PYE_STAT_WRK (about 23 thousand rows) is hash joined to a hash-join row source (about 5.8 million rows). The matching of keys is faster because there are fewer rows to scan through and fewer rows that need to be filtered out. Still, the output of the outer hash join row source is 29 million rows.
The run in HCPRD is slower because the row source output of PS_GP_PYE_SEG_STAT (about 5.9 million rows) is hash joined to a hash-join row source (29 million rows). The matching of keys is slower because there are more rows to scan through and more rows that need to be filtered out. Again, the output of the outer hash join row source is 29 million rows.
It's the efficiency of the operation that makes the difference, even though you get the same output in both environments. So I could explore the LEADING hint to force the more efficient join order.
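If I go that route, the hint might look something like this. A sketch only: the real GPPDPRUN statement lives in the stored statement table, so the aliases, join columns (EMPLID), and the aggregate shown here are hypothetical:

```sql
-- Sketch: drive the join from the small working-storage table and
-- hash join outward, roughly as HCPRD4 does.
-- Aliases and join keys are illustrative only.
SELECT /*+ LEADING(wrk prc seg) USE_HASH(prc seg) */
       COUNT(*)
FROM   PS_GP_PYE_STAT_WRK wrk,
       PS_GP_PYE_PRC_STAT prc,
       PS_GP_PYE_SEG_STAT seg
WHERE  prc.EMPLID = wrk.EMPLID
AND    seg.EMPLID = prc.EMPLID;
```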
I'll update you with your questions tomorrow :)
- How many employees are you paying per pay period? How many pay periods have you got?
- Are you running multiple concurrent payroll calculation processes?
Yes, we have encountered a MERGE JOIN CARTESIAN on a particular SQL statement; it made the payroll run for so long that we had to cancel it and re-run. To permanently disable cartesian joins, we used the hidden parameter _OPTIMIZER_SORTMERGE_JOIN_ENABLED = FALSE.
Regarding the *WRK tables, what was your OPTIMIZER_DYNAMIC_SAMPLING value when you locked the table statistics? Is it 4, as David suggested?
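On our side, the sequence I have in mind for the *WRK tables is something like this. A sketch: SYSADM as the owning schema and PS_GP_PYE_STAT_WRK as the example table are my assumptions, and level 4 follows David's suggestion:

```sql
-- Sketch: clear then lock stats on a *WRK table so they cannot be
-- re-gathered, and let dynamic sampling cover it at parse time.
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_GP_PYE_STAT_WRK');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SYSADM', tabname => 'PS_GP_PYE_STAT_WRK');
ALTER SYSTEM SET OPTIMIZER_DYNAMIC_SAMPLING = 4;
```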
I've also read your nice posts on PeopleSoft:
http://dbasrus.blogspot.com/2007/09/more-on-peoplesoft.html
http://dbasrus.blogspot.com/2007/09/one-for-peoplesoft-folks.html
And found some references at Metalink3.
- Karl Arao http://karlarao.wordpress.com
On Tue, Jul 14, 2009 at 12:33 AM, David Kurtz<info_at_go-faster.co.uk> wrote:
> Comments in-line
> And have you set _UNNEST_SUBQUERY = FALSE? PeopleSoft have recommended
> setting this parameter since Oracle 9i (when the default changed to TRUE).
>>[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Nuno Souto
>>- it does not matter when/how you analyze them, they will
>>NEVER have the correct statistics if the Peoplesoft code
>>itself doesn't do an ANALYZE after populating them during each run.
>>- which doesn't happen. Hence the problem.
> There is an option on the run control to do exactly this.
>>1- Get someone to recode the SQL to make it efficient. Most
>>of the PS code in paycalc is auto-generated by a very bad PS
> Actually the code for this particular statement is static. It is in the
> stored statement table.
> So you can change it.
>>2- Clear the stats on the most used *WRK tables then LOCK the
>>stats on the tables so they cannot be re-analyzed. Now,
>>enable optimizer_dynamic_sampling database-wide and make sure
>>the optimizer_mode is ALL_ROWS as recommended by Peoplesoft.
> On Oracle 10g, if you set OPTIMIZER_DYNAMIC_SAMPLING to 4 (the default is
> 2), then this is quite successful.
> However, if you do this, you can't collect stats in the program with the
> option on the run control, because running ANALYZE on a table with locked
> stats raises an error. Unfortunately it doesn't use dbms_stats.