Re: high "latch: cache buffers chains" waits in DB

From: Greg Rahn <>
Date: Tue, 26 Feb 2008 19:57:07 -0800
Message-ID: <>

On 2/26/08, <> wrote:
> If you are dealing with two separate problems it may be better to eliminate
> one of them. The first problem I would try to eliminate is the max
> processes. Start out with Shared Servers (a measure of success would be to
> know, by sampling, max number of active sessions); you could go to town by
> implementing connection pooling and multiplexing, but start with a simple
> shared server model. I have built perl/cgi based web apps and have
> successfully dealt with them using MTS/SS.
> This may give you the breather necessary to see if the second problem is
> related to above or a separate issue.
> Regards,
> -Krish

While the principle is novel, I'm going to respectfully disagree with this suggestion.

Based on the basic information provided, the high number of processes is a *symptom* of some locking problem, not its cause. When the locking occurs, other processes simply queue up until the max processes limit is hit and the db is unusable. Raising the limit would only let this runaway train gain more speed and cause more damage, possibly rendering the host itself unusable, so that instead of a db restart a hard reboot of the host is required. In this case, resources are likely to be consumed in direct proportion to their availability.
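One quick way to confirm the queue-behind-a-blocker hypothesis (a sketch, assuming 10g or later where V$SESSION exposes BLOCKING_SESSION; column availability varies by version):

```sql
-- Count how many sessions are stacked up behind each blocking session.
-- A tall pile behind one or two SIDs supports "locking causes the
-- process build-up" rather than the other way around.
SELECT blocking_session,
       COUNT(*) AS waiters
FROM   v$session
WHERE  blocking_session IS NOT NULL
GROUP  BY blocking_session
ORDER  BY waiters DESC;
```

Sampling this during an episode, before the processes limit is reached, shows whether the growth in session count tracks one blocking chain.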

If the issue is related to locking, then Shared Servers isn't going to yield any benefit, since there really isn't any time when a connection is idle: either it (1) logs on, does its work, then logs off, or (2) logs on, does work, and is blocked. Either way, the connection is active (as the db sees it) and requires a server process.
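This is also easy to check empirically before committing to MTS/SS (a sketch against V$SESSION; the 'USER' filter is an assumption to exclude background processes):

```sql
-- Shared servers only pay off when many sessions sit INACTIVE between
-- calls. If nearly everything is ACTIVE (working or blocked on a lock),
-- each session still pins a server process and SS buys nothing.
SELECT status,
       COUNT(*) AS sessions
FROM   v$session
WHERE  type = 'USER'
GROUP  BY status;
```

If the ACTIVE count is close to the total during the problem window, that rules out idle-connection overhead as the thing to fix.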

In this case, I think it is safe to say "A" causes "B" so fixing "A" should be the goal, not working around "B".


Greg Rahn
Received on Tue Feb 26 2008 - 21:57:07 CST
