Date: Fri, 25 Apr 2008 13:41:01 -0400
I have an "insert append nologging as select" query that joins a 71-million-row table against a single partition of a multi-billion-row table; the partition holds 700 million rows and is referenced directly using the extended partition syntax. The result set should be around 200 million rows. The explain plan is simple: a full scan of the heap table and of the partition, then a hash join of the two, followed by a hash group by, and finally the insert.

My test box is a Sun machine with 1 processor and 8 cores. I am explicitly requesting a parallel degree of 5 on each table (reduced from 10 on each, with the same symptoms), and the full scans and hash join(s) complete within an hour based on v$session_longops. After that, 5 of the slaves go to an inactive status and the other 5 sit on "PX Deq Credit: send blkd" waits, with p1text/p2text/p3text of sleeptime/senderid, passes, qref. My parallel message size is 16k. The Stats tab in Toad shows "PX local messages received" of just over 1 million and a slowly increasing "PX local messages sent" of 600,000, an hour and a half after the hash join finished. CPU utilization is currently at 2-3% (97% idle).

I have searched through Metalink and Google and have read all the papers on parallel execution I could find (including Doug Burns's "Suck It Dry") and am not sure where to go from here. Any ideas, theories, hints or links would be appreciated.
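For reference, the statement has roughly this shape; the table, column, and partition names below are made up, and the hints are only a sketch of the kind described above (NOLOGGING itself is actually a table attribute set beforehand, not part of the insert):

```sql
-- Hypothetical names; a sketch of the statement shape, not the real query.
-- Assumes: ALTER TABLE target_tab NOLOGGING; has already been run.
INSERT /*+ APPEND */ INTO target_tab
SELECT /*+ PARALLEL(s, 5) PARALLEL(f, 5) */
       s.cust_id,
       f.sale_date,
       SUM(f.amount)
FROM   small_tab s,
       big_tab PARTITION (sales_q1_2008) f  -- extended partition syntax
WHERE  s.cust_id = f.cust_id
GROUP BY s.cust_id, f.sale_date;
```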
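In case it helps anyone reproduce what I am seeing, this is one way to watch what each PX slave is waiting on while the job runs (standard v$ views; the join and column choices here are just my own diagnostic query, not anything from a paper):

```sql
-- Per-slave wait state for all active PX sessions, grouped by query
-- coordinator (qcsid) and slave set. Run as a DBA while the insert runs.
SELECT p.qcsid,
       p.server_group,
       p.server_set,
       s.sid,
       s.status,
       w.event,
       w.p1text, w.p1,
       w.seconds_in_wait
FROM   v$px_session  p
JOIN   v$session     s ON s.sid = p.sid
JOIN   v$session_wait w ON w.sid = s.sid
ORDER  BY p.qcsid, p.server_group, p.server_set;
```

This is how I can see that one slave set is inactive while the other sits on "PX Deq Credit: send blkd".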
As a side note, I was shot down by the application vendor when I proposed sub-partitioning (or repartitioning at a finer grain), on the grounds that it would hurt the performance of their engine. Currently the table is partitioned by quarter on sales date, and partitions vary between 30 million and 700 million records for reasons too numerous to explain.
Ken Naim

Received on Fri Apr 25 2008 - 12:41:01 CDT