Re: Oracle lock timeouts

From: Charlie Toor <toor_at_hugh_fraser.dofasco.ca>
Date: 6 Aug 93 12:16:27
Message-ID: <1993Aug6.121629.3501_at_eisgen.dofasco.ca>


I agree with the heartbeat idea. TCP keepalives are too generous for client/server connectivity monitoring. What amazes me is that killing the server process seems to be the state-of-the-art solution today for database packages that claim to be client/server. This is totally unacceptable for any application that's even close to being realtime. We're developing applications for use in production scheduling, order handling, etc., where waiting for a 30-minute record lock timeout or a lost-connection monitor costs us money. I find it difficult to believe that a simple client/server "still there?" background communication isn't already built into the tools to ensure that dead clients have all their server resources released, which led me to believe that I'd overlooked something and prompted me to post my original message.
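A minimal sketch of the server-side bookkeeping such a heartbeat would need (the names here are illustrative, not from any Oracle or Sybase API): each client sends a periodic "still there?" message, and the server releases the locks and sessions of any client that has gone silent longer than the allowed window.

```python
import time

class HeartbeatMonitor:
    """Hypothetical server-side heartbeat tracker: clients that stop
    beating past `timeout` seconds are reported so their record locks
    and other server resources can be released."""

    def __init__(self, timeout=5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock           # injectable clock, handy for testing
        self.last_seen = {}          # client id -> time of last heartbeat

    def beat(self, client_id):
        """Record a 'still there?' message from a client."""
        self.last_seen[client_id] = self.clock()

    def dead_clients(self):
        """Return clients whose heartbeat is overdue; the caller would
        then drop their sessions and free their locks."""
        now = self.clock()
        return [c for c, t in self.last_seen.items()
                if now - t > self.timeout]
```

For example, with a five-second window, a client last heard from seven seconds ago shows up in `dead_clients()` while one heard from four seconds ago does not.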

I also don't believe reducing the keepalive timer will help if the client application crashes but the system remains running. The probe sent when a keepalive timer expires is answered by the TCP stack on the other end, not by the application holding the socket. In many (most) cases, FTP Software's TCP/IP implementation thinks tasks still have connections even though they've died (i.e. through a GPF, three-finger salute on Windows 3.1, etc.), so I believe the server will still think there's a valid connection. Which just reinforces the need for "heartbeat" chatter between the client and server.
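To make the distinction concrete, here is how keepalive is typically enabled today, in a modern Python sketch. It is a per-socket option handled entirely by the kernel: the peer's TCP stack answers the probes, so a crashed application on a still-running machine keeps "responding" just as described above.

```python
import socket

# Turn on TCP keepalive for a socket. The probes this enables are
# generated and answered by the TCP stacks at each end, not by the
# applications, which is why keepalive cannot detect an application
# that has died while its operating system keeps running.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Probe timing is a platform matter (e.g. TCP_KEEPIDLE on Linux);
# the traditional default is on the order of two hours between probes,
# which is the "too generous" timer complained about above.
```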

I don't believe this problem is unique to Oracle. I've seen identical situations with Sybase client/server as well. Kind of scary if this is as good as it gets.
