
Re: How to determine the sessions parameter in the pfile

From: Paul Drake <drak0nian_at_yahoo.com>
Date: 27 Jan 2004 22:59:38 -0800
Message-ID: <1ac7c7b3.0401272259.20742e60@posting.google.com>


Walt <walt_at_boatnerd.com.invalid> wrote in message news:<4016F475.FD20601D_at_boatnerd.com.invalid>...
> We just put an instance of Oracle 9.2 on W2k3 online. So far, so
> good. It's connected to a webserver farm which uses connection pooling
> to increase or decrease the number of connections based on load.
> Unfortunately, at some peak load times the maximum number of sessions is
> reached and users get the "ORA-00020: maximum number of processes exceeded" error.
> During those periods, the oracle server doesn't seem to be stressed (low
> CPU usage, etc.) so I think the problem is just that the processes
> parameter is artificially low.
>
> So, how does one determine a good value for the processes parameter?
> I'd like to set it as high as the server will handle, but I'd rather not
> do this by trial and error.
>
> I've read elsewhere that you should budget 7Meg of memory per process.
> Is this still true in 9.2?
>
> And if it is, presumably I'd calculate a value for the processes
> parameter by dividing 7 into the total memory available. What should I
> use for total memory?
> o the total SGA
> o the max SGA
> o the total RAM of the server
> o something else?
>
> Thanks.

Install statspack.
Schedule snapshots to be taken (say, hourly) and take additional snapshots during periods of interest.
Add your own scripts to the performance stats being collected.
A simple one would spool a count from v$session, plus the output of pslist oracle, to a file along with the date/time.
This can assist you in numerous ways, but knowing the number of connected sessions at those peak times would seem to be the important piece in your case.
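
A minimal sketch of such a script, run from a scheduled task via sqlplus; the spool path is a placeholder, and note that 9.2 SQL*Plus SPOOL has no APPEND option, so accumulating output across runs is left to the calling batch file:

  -- session_count.sql: one timestamped session count per run
  set heading off feedback off pagesize 0
  spool c:\temp\session_count.lst
  select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') ||
         '  sessions=' || count(*)
    from v$session;
  spool off
  exit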
I believe the default number of job_queue_processes in 9.2 is 10. If you aren't running many dbms_job jobs, you can likely reduce that setting.
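
For reference, you can check the current value and lower it on the fly; the value 2 here is only an example, and scope=both assumes an spfile (with a plain pfile, edit the file and restart instead):

  SQL> show parameter job_queue_processes
  SQL> alter system set job_queue_processes = 2 scope=both;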
If you use pga_aggregate_target, you are not allocating a fixed 7 MB per dedicated server process; that figure largely depends on settings such as hash_area_size and sort_area_size, which automatic PGA management replaces with a single overall target. You could cap the SGA at, say, 1.5 GB and not have to worry so much about per-process memory when raising the process limit.
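
A hedged sketch of what the relevant pfile entries might look like; the sizes are illustrative only, not a recommendation for your server:

  # memory and process settings - sizes are examples only
  sga_max_size         = 1536M
  pga_aggregate_target = 512M
  workarea_size_policy = AUTO
  processes            = 300
  sessions             = 335    # 9.2 default derivation is (1.1 * processes) + 5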

I would look into whether the middle-tier app is properly closing connections.
It may be leaving zombied processes that are not terminated for quite some time; v$session.last_call_et is a good place to look.
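
Something along these lines (the one-hour threshold is just an example) will show how long each user session has been idle:

  select sid, serial#, username, status, program,
         round(last_call_et / 60) as idle_minutes
    from v$session
   where type = 'USER'
     and last_call_et > 3600    -- idle more than an hour
   order by last_call_et desc;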

Enable auditing (audit_trail in the init file) and issue the following:

SQL> audit session;

Scan the audit trail for "LOGOFF BY CLEANUP" events.
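
Note that this requires audit_trail to be set (for example audit_trail = db in the pfile) and an instance restart. Once it is in effect, a query along these lines against dba_audit_session will show the sessions PMON had to clean up:

  select username, os_username, terminal,
         to_char(timestamp, 'YYYY-MM-DD HH24:MI')   as logon_time,
         to_char(logoff_time, 'YYYY-MM-DD HH24:MI') as logoff_time
    from dba_audit_session
   where action_name = 'LOGOFF BY CLEANUP'
   order by timestamp;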

See, if you just hike the processes parameter while the middle-tier app is creating zombies, you are only putting off the inevitable: exhaustion of server resources.

Now, if dead connection detection (DCD) worked reliably on 9.2.x, you wouldn't have to write your own zombie-killer. It doesn't, so you might.
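
For reference, DCD is controlled by SQLNET.EXPIRE_TIME in sqlnet.ora on the database server (the 10-minute value below is just an example); whether it actually reaps your zombies on 9.2.x is the part you would have to test:

  # sqlnet.ora on the database server
  SQLNET.EXPIRE_TIME = 10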

Pd

Received on Wed Jan 28 2004 - 00:59:38 CST
