Re: cache buffer chains/where in code

From: Greg Rahn <greg_at_structureddata.org>
Date: Fri, 27 Nov 2009 20:37:29 -0800
Message-ID: <a9c093440911272037g12e3b69k2432121d83786857_at_mail.gmail.com>



400 sessions seems very excessive for this hardware (how many CPUs, and what model? What does cpu_count show if it's defaulted?). I've seen numerous systems run significantly better after reducing the number of connections/sessions substantially. Most people think more == better, but that is usually not the case. I generally refer to this scenario as being "over-processed".

I'd be interested to know whether the issue still appears with a reduced number of sessions. I'd suggest experimenting to find the minimal number of sessions required to keep response times acceptable, and observing how that impacts CPU usage and the run queue. As a starting point I'd use one session per CPU core (per thread in the case of the CMT processors).
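The starting point above (one session per hardware thread) can be sketched from the OS as follows; this is a minimal sketch, assuming a host where `getconf _NPROCESSORS_ONLN` is available (it is on both Solaris and Linux) -- the T2 figures in the comment are from its published specs, not from this thread:

```shell
# starting point: one session per hardware thread (on CMT, count threads, not cores)
threads=$(getconf _NPROCESSORS_ONLN)
# an UltraSPARC T2 exposes 64 threads (8 cores x 8 threads per core),
# so the initial session cap on that box would be 64 -- tune from there
echo "suggested initial session cap: $threads"
```

Oracle's cpu_count parameter should report the same number when defaulted, which is one way to cross-check it.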

On Fri, Nov 27, 2009 at 11:18 AM, Christo Kutrovsky <kutrovsky.oracle_at_gmail.com> wrote:
> I've analyzed ASH data for the problem period; usually there are 10-20 sessions in
> each sample. When this happens, there are nearly 400 sessions, with 250 of them
> waiting on the same latch/latch address, and 170 "ON CPU".
>
> So that drives me towards Greg's suggestion that it could be a deep CPU
> run-queue issue. This can be confirmed with your suggestion of capturing
> vmstat/prstat information.
>
> I wonder what the correct approach is here to prevent deep CPU run queues
> from causing latch contention, considering the UltraSPARC T2 CMT CPUs. Reduce
> the number of sessions? Implement Resource Manager?
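The vmstat/prstat capture suggested above can be scripted with a small sampling loop; this is a hedged sketch (the output paths and the exact vmstat/prstat invocations in the comments are illustrative assumptions, and prstat is Solaris-only):

```shell
# sample_stats: append timestamped output of a command to a file,
# so OS scheduler state can be correlated with ASH samples afterwards
# usage: sample_stats "<command>" <interval_seconds> <count> <outfile>
sample_stats() {
    cmd=$1; interval=$2; count=$3; out=$4
    i=0
    while [ "$i" -lt "$count" ]; do
        # keep only the last line of the command's output (the fresh sample)
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$($cmd | tail -1)" >> "$out"
        i=$((i + 1))
        [ "$i" -lt "$count" ] && sleep "$interval"
    done
}

# e.g. watch the run queue (the 'r' column) every 5s for 5 minutes:
#   sample_stats "vmstat 1 2" 5 60 /tmp/runq.log
# on Solaris, per-thread microstates from prstat as well:
#   sample_stats "prstat -mL 1 1" 5 60 /tmp/prstat.log
```

A deep run queue showing up in the 'r' column at the same timestamps as the latch waits in ASH would support the run-queue theory.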

-- 
Regards,
Greg Rahn
http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Fri Nov 27 2009 - 22:37:29 CST