
Re: Question Regarding MTS Server Behavior

From: Ronald Rood <devnull_at_ronr.nl>
Date: 3 Jun 2003 04:26:05 -0700
Message-ID: <67ce88e7.0306030326.2e104ebb@posting.google.com>


Phil Singer <psinger1_at_chartermi.net> wrote in message news:<3EDB265B.68CCF69E_at_chartermi.net>...
> We are using an 8.1.7 dedicated server configuration, with
> ADO managing connection pools. At unpredictable moments,
> the latency in our TCP network increases to the point where
> connections start timing out. When this happens, ADO
> thinks the server has dropped it, and never uses that
> connection again, while Oracle waits for DCD to happen

Best would be to solve the network problem ;-) I don't know about the MTS behaviour in this situation, but why not just make the expire_time shorter? (sqlnet.expire_time=N, with N in minutes.)
It could also make things a little worse, since it generates some extra traffic ...
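For what it's worth, a minimal server-side sqlnet.ora entry would look something like this (the one-minute interval is just an example value, not a recommendation):

    # server-side sqlnet.ora -- enables Dead Connection Detection (DCD)
    # Oracle sends a probe to idle sessions every N minutes; here N = 1
    SQLNET.EXPIRE_TIME = 1

Keep in mind the probes only help Oracle clean up dead sessions on the server side; they do nothing for what ADO thinks about the connection on the client side.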

regards,
Ronald.



http://ronr.nl/unix-dba

Received on Tue Jun 03 2003 - 06:26:05 CDT
