Re: olog() generating defunct processes.

From: Sunil <psunil_at_my-deja.com>
Date: 2000/07/11
Message-ID: <8kfs8o$2gq$1@nnrp1.deja.com>

Thank you, Doug, for the detailed information.

As you mentioned, the problem occurs when the child server process exits due to a disconnect.
My program establishes a connection at startup, collects data whenever needed, and keeps the connection active until I close the main application. Sometimes, while my application is still running and there is an active server process, there are also defunct processes whose PPID is my application's.
I have not been able to trace why my program is not reusing the same server process to get the data; it seems to create a new server process whenever required (perhaps because the previous one became defunct). So, first: is there anything besides a disconnect that can make a server process defunct? Second: is it possible to clear all those defunct processes without using the BEQUEATH_DETACH parameter, say by handling signals myself in my application program? What I have in mind is the sketch below.
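
A minimal sketch, assuming the application is free to install its own SIGCHLD handler; reap_children() is an illustrative name of mine, not anything from Net8:

#include <signal.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

static void reap_children(int sig)
{
    (void)sig;
    /* WNOHANG: collect every exited child without blocking. */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = reap_children;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;   /* restart interrupted syscalls */
    sigaction(SIGCHLD, &sa, NULL);

    /* ... olog() connect at startup, normal application work ... */
    return 0;
}

One caveat I can see: such a catch-all handler reaps every exited child, so if any library in the process (the Net8 client code included) does its own wait() for a specific child, the two could interfere.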

Thanks again.
Sunil.

In article <020c08c4.e87441b5_at_usw-ex0102-015.remarq.com>,   fro_doe <fro_doeNOfrSPAM_at_hotmail.com.invalid> wrote:
> When a connection is made to the database without going through
> the listener, the parent of the server process is the client
> program (e.g. sqlplus, a Pro*C program, exp, etc.). Once this
> child process exits (because of a database disconnect), it is
> removed from the process table only when its parent (the client
> process) issues a wait() to get the exit code of its child. If
> that wait() is never issued, the child process will remain
> defunct until the parent goes away, at which time the
> child's parent becomes init, which issues the wait().
>
> When setting bequeath_detach=yes, the Net8 code in the client
> program does a double fork to create the child server process,
> which disassociates the parent from the child. The first process
> which is forked creates a second process. The second process
> then execs the oracle image while the first process exits. So,
> the parent of the server process becomes init, which will always
> issue a wait() when one of its children exits.
>
> The communication mechanism between the client and server process
> remains the same (i.e. UNIX pipes), the only change is how the
> server process is created (two forks instead of one). On Solaris
> (using truss), HP (using tusc or trace), or AIX (using trace),
> you can see this difference in behavior by tracing the client
> process startup along with its connection to the database. You
> can even verify this by simply looking at the PPIDs in a ps
> listing.
>
> Doug.
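
For reference, the double fork described above looks roughly like this in plain C. This is only a sketch of the general UNIX technique, not the actual Net8 code; exec'ing "sleep" stands in for exec'ing the oracle server image:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                  /* first child */
        if (fork() == 0) {
            /* grandchild: where the server image would be exec'd;
             * "sleep 60" is a stand-in. */
            execlp("sleep", "sleep", "60", (char *)NULL);
            _exit(127);              /* exec failed */
        }
        _exit(0);                    /* first child exits immediately */
    }
    /* parent (the client): reap the short-lived first child.  The
     * grandchild is inherited by init, which always issues the
     * wait(), so no defunct entry accumulates under the client. */
    waitpid(pid, NULL, 0);
    return 0;
}

The point is that the process which ends up running the server image is never a direct child of the client, so even if the client never calls wait(), nothing is left defunct in its process table.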

Received on Tue Jul 11 2000 - 00:00:00 CDT
