Re: Bottleneck in 8i parallelism (c.d.o.server)
Just had a re-read of your previous post. My comments about parallel 2 are probably wrong - I didn't look closely enough at your comment:
>> >we are submitting 2 parallel tasks.
Do you mean by this that you have set parallel_threads_per_cpu to 2? I was reading it as 'the tables are declared parallel 2'.
Assuming my first interpretation was wrong, the presence of 8 CPUs, a value of 2 for this parameter, and the /*+ parallel(alias) */ hints with no explicit degree may mean that your query is running with 8 or 16 parallel query slaves. If each of these runs up a very large memory demand you may simply have a swapping problem - which would fit your observation that your internal discs are at 100% I/O wait.
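For what it's worth, here is the arithmetic behind the "8 or 16" guess, using the cpu_count and parallel_threads_per_cpu figures quoted in this thread (the degree the optimizer actually picks depends on which basis it uses):

```shell
# Hinted queries with no explicit degree derive their DOP from instance
# parameters.  Figures below are the ones mentioned in this thread.
CPUS=8                 # number of CPUs in the box
TPC=2                  # parallel_threads_per_cpu
echo "DOP from cpu_count alone:                      $CPUS"
echo "DOP from cpu_count * parallel_threads_per_cpu: $((CPUS * TPC))"
```

Either way, that is a lot of slaves each carrying its own private memory areas.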
To do:
Check the memory demand on each of the PQ slaves (top is a good bet here, or ps -efl to get the number of memory pages).
To check swapping activity: I think one of the options to vmstat reports swap rates (as opposed to paging rates) - might be vmstat -S or vmstat -s.
Would you also post the SQL run by the slaves (the OTHER column in the plan_table), along with the id and node columns?
Are the tables partitioned ? If so, by what ?
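The first check above can be scripted. The listing below is a fabricated, simplified sample (user, SZ in pages, process name); on a live box you would feed it real `ps -efl` output and pull out the SZ column for the ora_p* slave processes instead of using the here-doc:

```shell
# Count the PQ slave processes (named ora_pNNN_<SID>; "DWH" is a made-up SID)
# and total their memory pages.  Sample data only - replace with: ps -efl
cat <<'EOF' | awk '$3 ~ /^ora_p[0-9]/ {n++; pages += $2; print $3, $2, "pages"}
                   END {print n, "slaves,", pages, "pages total"}'
oracle 51200 ora_p000_DWH
oracle 49152 ora_p001_DWH
oracle  1024 ora_smon_DWH
oracle 50176 ora_p002_DWH
EOF
```

If each slave is sitting on tens of thousands of pages, the swapping theory starts to look very plausible.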
Unless you have a good reason for setting such large sort_area_size and hash_area_size values, I would reduce them -
sort_area_size rarely helps above about 10MB; hash_area_size can sometimes be very helpful when large, but I wouldn't go above 32MB unless I had first proved a need.
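To show why those two parameters matter so much under parallel query: each slave can allocate its own sort and hash areas, so the totals multiply up fast. A back-of-envelope sketch - the 100MB figures are invented for illustration, not taken from your init.ora:

```shell
# Worst-case private memory demand for one parallel query.
# Both area sizes below are hypothetical, purely to show the multiplication.
SLAVES=16        # e.g. DOP 16 with a single slave set
SORT_MB=100      # hypothetical sort_area_size of 100MB
HASH_MB=100      # hypothetical hash_area_size of 100MB
echo "worst-case PGA demand: $((SLAVES * (SORT_MB + HASH_MB))) MB"
```

On an 8-CPU box that sort of figure can easily exceed physical memory, which is exactly the swapping scenario described above.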
I suspect that your shared_pool_size is a bit big for a data warehouse - how many pl/sql packages do you use, and how many different SQL statements ?
Ditto, I suspect your large pool is significantly bigger than needed. (You're not running multi-threaded servers on a data warehouse are you ?)
--
Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
aozocak_at_my-deja.com wrote in message <810s34$kao$1_at_nnrp1.deja.com>...
Received on Sat Nov 20 1999 - 04:52:35 CST