RE: PARALLEL_MAX_SERVERS value
Date: Wed, 26 Oct 2011 12:14:25 -0400
I tend to agree with Chris' analysis. (No surprise there - if you think Chris is wrong about something you're well advised to double check that you understood what he wrote!)
But I do want to point out the context of a key exception that causes my clients trouble from time to time:
Parallel servers, parallel executions, and the like are by design intended to use as much of the machine's throughput as possible to solve the problem at hand as quickly as possible.
(That is not a bad thing, and when you want one monster query solved as soon as possible it is exactly what you want.)
But when you mix parallel execution (of pretty much anything) with interactive users, it is up to you to figure out, in the context of your systems, the best way to reserve some throughput capacity that remains available with minimal queuing time for those users.
Failing to do so injects high variability into interactive response times. Usually this frustrates functional service level objectives and irritates people.
One solution direction to this potential problem is to limit not only PARALLEL_MAX_SERVERS, but also the number of batch tasks running concurrently per workshift, whenever interactive users expect consistent response times to modest queries.
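As a rough sketch of that solution direction (PARALLEL_MAX_SERVERS and the DBMS_RESOURCE_MANAGER calls are real Oracle features; the specific values, plan name, and consumer-group name below are purely illustrative, not recommendations, and assume the plan and group already exist):

```sql
-- Cap the total pool of parallel execution servers (illustrative value only;
-- the right number depends on measurement of your exact system).
ALTER SYSTEM SET parallel_max_servers = 32 SCOPE = BOTH;

-- One way to reserve headroom for interactive users is the Resource Manager:
-- restrict the degree of parallelism available to a (hypothetical) batch group
-- during the interactive workshift.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'DAYTIME_PLAN',   -- hypothetical plan
    group_or_subplan         => 'BATCH_GROUP',    -- hypothetical group
    comment                  => 'Limit batch parallelism during workshifts',
    parallel_degree_limit_p1 => 4);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The point is not these particular numbers but the mechanism: cap the global pool, and additionally fence off how much of it batch work can consume while interactive users are on the system.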
When this is the case, Cary Millsap's "Batch Queue Management and the Magic of '2'" is useful reading, even though the exact number 2 may vary depending on changes in the characteristics of I/O system response and CPU/core/socket relationships.
With the introduction of SSDs and the increasing complexity of CPU/core/socket arrangements, memory cache levels, and NUMA considerations, your mileage may vary on the answer being exactly 2, but the underlying structure of how to think about this remains. (And the 2 is subject to measurement on your exact system.) If you measure 2 and you have interactive users, then the onus is on you to figure out how much less than 2 to run so as to leave enough headroom to maintain consistent interactive response.
Please do not translate the above into a blanket recommendation to set PARALLEL_MAX_SERVERS to 2 or even to lower than Chris' recommended starting point.
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Christian Antognini
Sent: Wednesday, October 26, 2011 3:10 AM
To: LS Cheng
Subject: Re: PARALLEL_MAX_SERVERS value
> But CPUx10 is definitely too many because IMHO a CPU, whether thread or
> core, should not handle more than 4 requests; 10 is a lot!
FWIW, on page 495 of TOP I wrote "...a value of 8-10 times the number of cores (in other words, the value of the initialization parameter cpu_count) is a good starting point".
I generally do not see a problem with such "high" values because:
- Slaves might be allocated to a specific execution but do nothing.
- Parallel processing is, generally, I/O bound. Especially with current CPUs you hit the I/O limit way before the system is CPU bound.
- Setting parallel_min_percent to a value higher than the default (0) is in many cases not an option.
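To make the starting point above concrete (the queries use the real V$PARAMETER view; the cpu_count of 16 and the resulting value of 128 are just illustrative arithmetic, not a recommendation for any particular system):

```sql
-- Inspect the inputs to the rule of thumb: 8-10 x cpu_count.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('cpu_count', 'parallel_max_servers', 'parallel_min_percent');

-- Example: if cpu_count = 16, the 8x starting point would be 128:
-- ALTER SYSTEM SET parallel_max_servers = 128 SCOPE = BOTH;
```

From there you adjust based on observed I/O saturation and downgrade behavior rather than treating the multiplier as fixed.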
IMO "low" values are only good when you control what's going on and, therefore, can avoid too many downgrades.
In summary, I generally prefer a system where too many slaves are allocated rather than the opposite. It goes without saying that there are exceptions!
Troubleshooting Oracle Performance, Apress 2008
http://top.antognini.ch

Received on Wed Oct 26 2011 - 11:14:25 CDT