Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Client performance.

Re: Client performance.

From: Greg Stark <greg-spare-1_at_mit.edu>
Date: 2000/06/24
Message-ID: <871z1n59m7.fsf@HSE-Montreal-ppp139421.sympatico.ca>#1/1

> > Also setting sqlnet.expire_time to 1 is something you definitely shouldn't
> > do. It means the server polls client sessions for dead connections
> > *every minute*, which simply means high overhead.

  1. How could once per minute be considered "high overhead"? These systems are capable of handling thousands of packets per *second*; once per minute is ludicrously low granularity. There's no particular reason it shouldn't be possible to use a 1s timeout on systems where every connection is expected to handle more than one transaction per second. If that is high overhead for Oracle, then something is drastically wrong with Oracle's network layer.
  2. Properly implemented, this feature wouldn't send a ping unless there had been no traffic on the connection for a minute. Is that how it works, or is there a way to make it work that way?
  3. This feature also seems to sometimes kill connections that are still perfectly alive but merely idle. That seems completely broken to me: it defeats the whole purpose of the feature, since in a properly working network layer a non-idle connection that breaks should be detected immediately through other timeouts, not through a ping test.
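To make point 2 concrete, here is a minimal sketch of idle-aware dead-connection detection: only sessions with no traffic for the expire interval get probed, so busy connections cost nothing. The class and method names are hypothetical illustrations, not Oracle's actual DCD implementation.

```python
import time

class IdleAwareProbe:
    """Sketch of dead-connection detection that only probes idle sessions.

    A session that has carried traffic within `expire_secs` is assumed
    alive and is not pinged; only sessions idle for at least that long
    get a probe packet. (Hypothetical illustration, not Oracle's code.)
    """

    def __init__(self, expire_secs=60):
        self.expire_secs = expire_secs
        self.last_traffic = {}  # session id -> timestamp of last activity

    def record_traffic(self, session_id, now=None):
        # Any send or receive on the session counts as proof of life.
        self.last_traffic[session_id] = time.time() if now is None else now

    def sessions_to_probe(self, now=None):
        # Only sessions silent for expire_secs or longer need a probe.
        now = time.time() if now is None else now
        return [sid for sid, t in self.last_traffic.items()
                if now - t >= self.expire_secs]
```

With this scheme a connection carrying even one transaction per minute is never probed at all, which is why the "high overhead" claim in the quoted text seems hard to justify.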

The reason people want to set it to small values is that Oracle has some major bugs in handling network errors. It often fails to notice network errors that should immediately close the connection, and instead keeps waiting on a closed file descriptor, or listening on file descriptors it never responds to.
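For reference, the parameter under discussion is set in the server-side sqlnet.ora. The value shown below is only an example for illustration, not a recommendation from this thread:

```
# Server-side sqlnet.ora
# Interval, in minutes, at which idle client sessions are probed
# for dead connections; 0 (the default) disables the feature.
SQLNET.EXPIRE_TIME = 10
```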

--
greg
Received on Sat Jun 24 2000 - 00:00:00 CDT

