Re: using pga_aggregate_target to control total memory usage caused by excessive connection.
Date: Tue, 7 Oct 2008 11:43:25 -0400
If you carefully read the reference:
Oracle attempts to keep the amount of private memory below the target specified by this parameter by adapting the size of the work areas to private memory. When increasing the value of this parameter, you indirectly increase the memory allotted to work areas. Consequently, more memory-intensive operations are able to run fully in memory and less will work their way over to disk.
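To see how close the instance is staying to that target in practice, the standard `v$pgastat` statistics are usually the first stop. A minimal sketch (statistic names as they appear in the 10g reference; adjust for your release):

```sql
-- Illustrative only: confirm the target and the automatic policy.
SHOW PARAMETER pga_aggregate_target
SHOW PARAMETER workarea_size_policy

-- How much PGA is actually allocated vs. the target, and how often
-- Oracle has had to exceed it ('over allocation count').
SELECT name, value, unit
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse',
                'over allocation count');
```

A non-zero, growing 'over allocation count' is the direct evidence that the target is being treated as a goal rather than a hard limit.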
Attempt != guarantee, as I see it. I have seen cases where the effective PGA was significantly over-allocated compared to the PGA_AGGREGATE_TARGET that was set. If my memory serves me right, it was related to parallel execution and an improperly configured number of parallel slaves.
The goal is to find a balance between the application connection pool's requirements, the memory settings, and the available hardware. If any one of these goes haywire, no matter how you tune the rest of the system, it won't perform.
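As a starting point for the tunable/untunable question below, one way to break the numbers down is to compare `v$pgastat` against the per-process totals in `v$process`. A sketch, assuming 10.2 (column names per the 10.2 reference; the exact split will vary by workload):

```sql
-- Instance-wide: the 'auto workareas' figure is the tunable part
-- (sorts, hash joins); the rest is untunable.
SELECT name, ROUND(value/1024/1024) mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated',
                'total PGA used for auto workareas',
                'maximum PGA allocated');

-- Per-process: the gap between allocated and used is dominated by
-- untunable memory (session heaps, PL/SQL state, open cursors).
SELECT ROUND(SUM(pga_alloc_mem)/1024/1024) alloc_mb,
       ROUND(SUM(pga_used_mem)/1024/1024)  used_mb,
       ROUND(SUM(pga_max_mem)/1024/1024)   max_mb
FROM   v$process;
```

Note also that `v$process` reports what Oracle has allocated from its heaps; the OS may still hold freed pages against the processes, which is one common reason ps/pmap totals run higher than the v$ figures.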
On Tue, Oct 7, 2008 at 11:06 AM, Zhu,Chao <zhuchao_at_gmail.com> wrote:
> hi, everyone,
> I think many people use this parameter instead of the very old
> *_area_size parameters these days, and it just works well for most sites.
> We are running into a different problem. With more and more traffic/SQL
> execution, we are having more and more connections to the database.
> Excessive session# is leading to excessive total PGA usage, and we have to
> continuously reduce the SGA to accommodate the increasing PGA size, or
> convert to MTS (but MTS comes with very high CPU overhead at high
> connection# too).
> Just wondering whether it is helpful to use this parameter to control
> the total PGA caused by the excessive connections. This parameter helps
> Oracle distribute the tunable PGA among processes, but I am not sure
> how to estimate the tunable/untunable PGA sizes.
> v$pgastat/sum(v$process.pga_max_size) shows 5 GB - 7 GB of PGA usage, but OS
> stats show around 10 GB - 15 GB of PGA usage.
> Our platform is solaris 10/oracle 10.2 on Sparc.
> Any comment is welcome. Thx.
> Zhu Chao