Re: latch: library cache in 10.2.0.4

From: Tim Gorman <tim_at_evdbt.com>
Date: Sun, 01 Jul 2012 19:21:29 -0600
Message-ID: <4FF0F799.7060105_at_evdbt.com>



John,
If you don't have hard-parsing issues, then I tend to look toward some other action requiring locking of the Shared Pool, and that brings to mind Automatic SGA Management. If SGA_TARGET = 0, then please disregard the rest of this response...
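(Just for reference, a quick way to check -- nothing beyond the standard parameter view is assumed here:)

    select name, value
      from v$parameter
     where name in ('sga_target', 'sga_max_size');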

If SGA_TARGET > 0, then please try the attached script (i.e. "sgar.sql"), which queries the GV$SGA_RESIZE_OPS view and tries to display the history of resizings of the SGA components. If the timing of a "shrink" operation on the Shared Pool coincides with the timing of this contention, then I like to examine the history of the "__shared_pool_size" parameter (i.e. using DBA_HIST_PARAMETER) and explicitly set the SHARED_POOL_SIZE parameter toward the max of all those observed values.
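I haven't reproduced sgar.sql inline here, but a rough sketch of the kind of queries involved might look like the following -- the column names are the standard ones in GV$SGA_RESIZE_OPS and the AWR views, but please treat this as an illustration rather than the actual script:

    -- history of SGA component resize operations, most recent first
    select inst_id, component, oper_type, oper_mode,
           initial_size/1048576 as initial_mb,
           final_size/1048576   as final_mb,
           status, start_time, end_time
      from gv$sga_resize_ops
     order by start_time desc;

    -- history of the "__shared_pool_size" parameter across AWR snapshots
    select s.begin_interval_time, p.instance_number, p.value
      from dba_hist_parameter p
      join dba_hist_snapshot  s
        on  s.snap_id         = p.snap_id
        and s.dbid            = p.dbid
        and s.instance_number = p.instance_number
     where p.parameter_name = '__shared_pool_size'
     order by s.begin_interval_time, p.instance_number;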

Please note that the SGA parameters (i.e. SHARED_POOL_SIZE, DB_CACHE_SIZE, etc.) become "floor" values rather than hard settings when SGA_TARGET > 0, so setting SHARED_POOL_SIZE = 3072M means that the Shared Pool can be resized larger than 3G, but never smaller. So, look at the history of automatic resizings and set a "floor" value for the Shared Pool toward the max of the previously observed values. The contention you're seeing, if it is caused by shrinkages of the Shared Pool, should be ample evidence that shrinkage is a bad thing.
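For example, if the observed maximum had been around 3G, then something like the following would establish that floor (assuming an spfile is in use; the 3072M figure is only the illustrative value from above, not a recommendation):

    alter system set shared_pool_size = 3072M scope=both sid='*';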

Hope this helps, and my apologies if it is way off base...

Thanks!

-- 
Tim Gorman
consultant -> Evergreen Database Technologies, Inc.
postal     => PO Box 352151, Westminster CO 80035
website    => http://www.EvDBT.com/
email      => Tim_at_EvDBT.com
mobile     => +1-303-885-4526
fax        => +1-303-484-3608
Lost Data? => http://www.ora600.be/ for info about DUDE...






On 6/29/2012 9:55 AM, John Darrah wrote:

> I am seeing periods of extreme contention on shared pool and library
> cache latches in one of my production databases. The periods are
> intermittent, with long gaps between incidents. This contention usually
> only lasts a few seconds, but on one occasion it required a bounce of the
> database to clear. Unfortunately I was not present for any of these
> incidents, but looking back in the AWR for the period that required a
> bounce does show some interesting information.
>
> Taken from dba_hist_active_sess_history:
>
> sample time                event                 p1           p2   distinct    sum of time
>                                                                    sessions    waited (sec)
> -------------------------  --------------------  -----------  ---  ---------   ------------
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875402112  215       2        61.78762
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875401152  215       5       144.05647
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875401792  215       8       278.460855
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875404032  215      12       373.605342
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875402432  215      13       416.581586
> 18-JUN-12 10.35.59.463 AM  latch: shared pool     4295924840  214      22       791.216218
> 18-JUN-12 10.35.59.463 AM  latch: library cache  12875400512  215      32       907.177485
>
> The output above shows there were 32 sessions all trying to get the
> library cache latch at address 12875400512, and in that sample they
> spent a total of 907 seconds waiting for that latch. Most of the
> sql_ids listed are NULL, and I'm not really sure what that means. I
> would think that for a library cache latch I should have a sql_id, but
> I could just be completely misinterpreting when library cache latches
> are acquired.
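>
> (A summary along those lines can be pulled with something like the
> query below -- just a sketch against the standard 10.2 ASH columns;
> time_waited is reported in microseconds there, hence the division:)
>
>     select sample_time, event, p1, p2,
>            count(distinct session_id)  as distinct_sessions,
>            sum(time_waited)/1000000    as time_waited_secs
>       from dba_hist_active_sess_history
>      where event in ('latch: library cache', 'latch: shared pool')
>      group by sample_time, event, p1, p2
>      order by time_waited_secs;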
>
> So my questions are: why would there be so many sessions, apparently
> not parsing SQL statements, contending for library cache latches, and
> has anyone seen this behavior elsewhere?
>
> Thanks,
>
> John
-- http://www.freelists.org/webpage/oracle-l
Received on Sun Jul 01 2012 - 20:21:29 CDT
