
Re: olog() generating defunct processes.

From: fro_doe <fro_doeNOfrSPAM_at_hotmail.com.invalid>
Date: 2000/07/11
Message-ID: <020c08c4.e87441b5@usw-ex0102-015.remarq.com>#1/1

When a connection is made to the database without going through the listener, the parent of the server process is the client program (e.g. sqlplus, a Pro*C program, exp, etc.). Once this child process exits (because of a database disconnect), it is removed from the process table only when its parent (the client process) issues a wait() to collect its exit code. If that wait() is never issued, the child remains defunct until the parent goes away, at which point the child is inherited by init, which issues the wait().
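
If you want to see the effect outside of Oracle, here is a minimal C sketch (just an illustration, not Oracle code): the parent never calls wait(), so the exited child shows up as <defunct> in a ps listing until the parent itself goes away.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            /* child: stands in for the dedicated server process;
               exits as soon as the "session" is over */
            _exit(0);
        }

        /* parent: stands in for the client (sqlplus, exp, ...) that
           never calls wait(); a ps run now shows the child as
           <defunct> until this parent exits and init reaps it */
        printf("child %ld is now defunct; check ps\n", (long)pid);
        sleep(60);
        return 0;
    }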

When bequeath_detach=yes is set, the Net8 code in the client program does a double fork to create the child server process, which dissociates the child from the parent. The first process that is forked creates a second process. The second process then execs the oracle image while the first process exits. So the parent of the server process becomes init, which will always issue a wait() when one of its children exits.
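
Again just as an illustration (this is not the Net8 source, and /bin/sleep stands in for the oracle image), the double-fork pattern looks roughly like this in C:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* first fork */

        if (pid == 0) {
            if (fork() == 0) {
                /* grandchild: its parent is about to exit, so it is
                   adopted by init, which always reaps its children;
                   this is where the server image would be exec'd
                   (/bin/sleep is only a stand-in) */
                execl("/bin/sleep", "sleep", "5", (char *)NULL);
                _exit(1);                   /* only reached if exec fails */
            }
            _exit(0);                       /* intermediate child exits at once */
        }

        /* original parent (the client): reaps the short-lived
           intermediate child; the server process is no longer its child */
        waitpid(pid, NULL, 0);
        return 0;
    }

Run it and do a ps in another window: the sleeping grandchild's PPID will be 1, and no defunct entry is left behind.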

The communication mechanism between the client and server process remains the same (i.e. UNIX pipes); the only change is how the server process is created (two forks instead of one). On Solaris (using truss), HP-UX (using tusc or trace), or AIX (using trace), you can see this difference in behavior by tracing the client process startup along with its connection to the database. You can also verify it simply by looking at the PPIDs in a ps listing.

Doug.


Received on Tue Jul 11 2000 - 00:00:00 CDT

