Oracle FAQ - Your Portal to the Oracle Knowledge Grid


Re: Session does not terminate when a connection drops while running PLSQL.

From: Karen <>
Date: Fri, 05 Jul 2002 00:21:38 GMT
Message-ID: <>

I agree with Sybrand that the first attempt to resolve problems like this one should be to tune the statements to run in less time. And, no doubt, the architecture another poster proposed can be difficult to maintain. The first idea should be to reduce the time the trigger code takes to run.

However, there are enough cases where the statement cannot be tuned to run in a suitable time, and you then have to either deal with killing sessions or change the architecture to involve another process.

Killing sessions traditionally has assorted problems. A known fact is that a session stays alive until the client makes a call to the database. This call can be anything - for example, another fetch request. If no call is made, the session will stay there forever. If a session is executing a SQL statement that does, e.g., heavy sorting, no calls are made, and Dead Connection Detection (DCD) will not help it. DCD will also not help if the client is a system that caches TCP connections - e.g., Windows. In that case, if the application crashes, the OS will still answer the DCD probe packets as if the connection were alive. If you kill a session with ALTER SYSTEM KILL SESSION, you are at PMON's mercy: PMON's decision to clean up a session can take up to 24 hours - a fact admitted by Oracle. The bottom line is: do not base your critical systems' architecture on this.
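For reference, a minimal sketch of the two mechanisms mentioned above - DCD is enabled server-side in sqlnet.ora, and a session is killed by sid and serial#. The username and the sid/serial# values here are examples, not from the original post:

```sql
-- sqlnet.ora on the server: send a DCD probe every 10 minutes
-- SQLNET.EXPIRE_TIME = 10

-- Find the sid and serial# of the stuck session (SCOTT is an example):
SELECT sid, serial#, status, last_call_et
  FROM v$session
 WHERE username = 'SCOTT';

-- Mark the session killed; the actual cleanup is still up to PMON:
ALTER SYSTEM KILL SESSION '12,345';
```

Even after KILL SESSION reports success, the session can linger in V$SESSION with status KILLED until PMON gets around to it, which is exactly the caveat above.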

Implementing an architecture like the one that was proposed is, however, possible if carefully planned. In many cases the developers do not realise that the logic has to be much more complicated, and miss something, which causes problems for the support team. Things to watch out for are, e.g.:
- making sure there is a way to stop the background process (it should poll a controlling table, use dbms_alert or something like that, and ensure in its logic that it does not execute a lengthy operation between such checks);
- distinguishing processing states for the rows in the "tasks" table, i.e. which ones were processed and which ones were not.
This can be really difficult to code properly.
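A rough sketch of what that might look like - the table names, states, and poll interval are all made up for illustration, and the long-running work itself is elided:

```sql
-- Hypothetical tables; task states: NEW -> RUNNING -> DONE
CREATE TABLE tasks (
  task_id   NUMBER PRIMARY KEY,
  payload   VARCHAR2(4000),
  status    VARCHAR2(10) DEFAULT 'NEW' NOT NULL
);

CREATE TABLE job_control (
  job_name  VARCHAR2(30) PRIMARY KEY,
  stop_flag CHAR(1) DEFAULT 'N' NOT NULL
);

-- Background worker: checks the stop flag between tasks,
-- never in the middle of a lengthy operation.
DECLARE
  l_stop job_control.stop_flag%TYPE;
BEGIN
  LOOP
    SELECT stop_flag INTO l_stop
      FROM job_control WHERE job_name = 'TASK_WORKER';
    EXIT WHEN l_stop = 'Y';

    FOR r IN (SELECT task_id FROM tasks WHERE status = 'NEW') LOOP
      UPDATE tasks SET status = 'RUNNING' WHERE task_id = r.task_id;
      COMMIT;
      -- ... do the long-running work for r.task_id here ...
      UPDATE tasks SET status = 'DONE' WHERE task_id = r.task_id;
      COMMIT;
    END LOOP;

    DBMS_LOCK.SLEEP(10);  -- poll interval in seconds
  END LOOP;
END;
/
```

Note the explicit RUNNING state: without it, a crash of the worker leaves the support team unable to tell which rows were picked up and which were not - exactly the problem described above.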

If it is not guaranteed that the code to update the table will ever return, a developer can attempt to lock the rows in question in NOWAIT mode and, if that fails, return a message to the user that this cannot be done at this time. The user then either waits some time and resubmits the request, or goes to a DBA to clean things up. Maybe this can work for some shops in the interim, while you are working on tuning the code or implementing another architecture.
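The NOWAIT approach can be sketched like this - the "tasks" table and its columns are illustrative, and the error handling is reduced to a message:

```sql
DECLARE
  row_busy  EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_busy, -54);  -- ORA-00054: resource busy
  l_payload tasks.payload%TYPE;
BEGIN
  SELECT payload INTO l_payload
    FROM tasks
   WHERE task_id = 42
     FOR UPDATE NOWAIT;  -- fail immediately instead of blocking

  -- ... perform the update while holding the lock ...
  COMMIT;
EXCEPTION
  WHEN row_busy THEN
    -- Tell the user to retry later instead of letting the
    -- session hang behind the dead session's locks.
    DBMS_OUTPUT.PUT_LINE('Rows are locked by another session - try again later.');
END;
/
```

Without NOWAIT, the SELECT ... FOR UPDATE would block indefinitely on locks still held by the dead session until PMON cleans it up.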

Sybrand Bakker wrote:

> On 4 Jul 2002 13:17:47 -0700, (Rauf Sarwar) wrote:
> >Generally...It is not good practice to have a client wait for "Several
> >minutes" for a database call to return. If the process (Insert,
> >Update, Delete) is going to take a long time then it is better to
> >break it up into two. First, client submits a request and saves it in
> >e.g. a holding table and returns immediately. Then that request can be
> >picked up by a background process which may take several minutes to
> >run independent of whether the calling client is still connected or
> >not.
> You aren't serious about that, are you?
> You just provided the number 1 recipe for a disaster database,
> requiring round the clock table maintenance.
> Problems like long-running statements should be *resolved* by tuning
> the statement.
> However, you preach symptom-fighting here, adding an extra layer of
> crap. I'm working for more than 40 hours per week now with such a
> system, where I officially work 32, and I can tell you it is
> definitely *NOT FUNNY*
> Regards
> Sybrand Bakker, Senior Oracle DBA
> To reply remove -verwijderdit from my e-mail address
Received on Thu Jul 04 2002 - 19:21:38 CDT
