
Home -> Community -> Mailing Lists -> Oracle-L -> Re: X$ksmsp (OSEE on Solaris 8)

Re: X$ksmsp (OSEE on Solaris 8)

From: Sai <>
Date: Sun, 23 Jul 2006 23:54:28 -0700 (PDT)
Message-ID: <>

This is my first post here, pardon me if I sent it incorrectly.

Memory chunk allocation and de-allocation in the shared pool is protected by up to 7 shared pool latches. Oracle starts with a single latch and, as the number of memory chunks grows, vertically partitions the shared pool across additional latches, up to a maximum of 7.

You can see how many shared pool latches are in effect by running the query "select count(distinct ksmchidx) from x$ksmsp".

If you suspect shared pool memory is leaking, and if you can afford to go after x$ksmsp (it has the potential to cause a database outage), periodically run the following query:

select ksmchcom, ksmchcls, sum(ksmchsiz), count(*) from x$ksmsp group by ksmchcom, ksmchcls;

and look for any abnormal trend. For example, Oracle 8i had a bug involving permanent memory; in that case, the total permanent memory reported by x$ksmsp would grow over time.

The total shared pool size reported by x$ksmsp will be larger than the actual shared pool setting, because the memory required for database block buffer headers and other fixed arrays is also included.
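
A minimal sketch of how that trend could be tracked by memory class, assuming access to x$ksmsp as SYS and accepting the outage risk described above (ksmchcls holds the chunk class, e.g. 'perm' and 'free'):

select ksmchcls, sum(ksmchsiz) total_bytes, count(*) chunks
from x$ksmsp
group by ksmchcls
order by total_bytes desc;

Comparing snapshots of this output over time should make a steadily growing 'perm' class stand out.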


Re: X$ksmsp (OSEE on Solaris 8)

A couple of months ago, Oracle Support sent me a query to run against
x$ksmsp in order to identify shared pool fragmentation. They assured me
that any problems with querying x$ tables were from earlier versions of
Oracle. The local technical sales rep also assured us that there should be
no issues. I was born in the morning but I wasn't born YESTERDAY morning,
so I was nervous about querying that table directly. After testing the
query in a non-Production environment, I verified that not only did the
query hold the shared pool latch, it also took an hour to run. I couldn't
log onto the database from another session. That could have been quite
painful in Production.

Incidentally, we seem to have reached reasonable stability in our RAC
environment. After suffering for months with instance crashes due to
ORA-04031, Oracle Support recommended that we set _lm_res_cache_cleanup=70.
We implemented that in early May and haven't had any crashes since. We do
still have a possible memory leak due to automatic statistics gathering,
which shows up as a continually increasing value for MISCELLANEOUS in
V$SGASTAT for the shared pool. When it reaches 900 MB (of a 1300 MB
shared_pool_size), we plan an off-hour bounce of that instance. It takes
about 6 weeks for MISCELLANEOUS to reach 900 MB. The other two instances
in the cluster don't seem to have the same problem. Those instances have
been up for almost 8 weeks now. Instances used to crash after 3 weeks on
average.
Database stability is a wonderful thing.
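
A minimal sketch of a query to watch that trend between bounces (column and pool names follow V$SGASTAT; the entry is typically reported in lowercase as 'miscellaneous', though case may vary by version):

select pool, name, bytes
from v$sgastat
where pool = 'shared pool'
and name = 'miscellaneous';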

Mark Strickland
Next Online Technologies
Seattle, WA

On 6/28/06, Schultz, Charles <sac_at_xxxxxxxxxxxxx> wrote:

    I looked at v$sgastat, but it was too general. We have fragmentation
    issues (in the shared pool, I believe) and Oracle is saying that we have
    a potential memory leak (still in the diagnosis phase). Hence, I think
    the PGA and buffer pool views are out, although I could be wrong. The
    ora-4031 trace files are reporting errors on the following objects:


    Of course, one of the most confusing problems with this fragmentation
    issue is whether to decrease or increase the shared pool. Increasing the
    shared pool has the temporary effect of making the ora-4031 errors
    disappear, but that seems to be a bad long-term solution, as decreasing
    the shared pool might actually be the better way to go. My one caveat
    with this approach (resizing the shared pool) is that it ignores the
    root cause of the problem - if the fragmentation is avoidable, why not
    avoid it? I am still trying to learn more about this concept - even
    though I have read a lot (Tom Kyte, Jonathan Lewis, etc.), the material
    is sinking in slowly. From talks I have had offline, this might be a
    case of contention on a shared pool heap latch - a requestor wants a
    certain size chunk and the latch for that size chunk is busy. My memory
    of the details might be fuzzy.

    I ran across note 367392.1, but all of our traces are from foreground
    processes, not background.

    Following note 146599.1, I peeked at V$SHARED_POOL_RESERVED but did not
    learn much (one size that has failed a number of times, 4200). Also,
    this note points to the x$ tables, hence my original question about
    x$ksmsp. If the performance is so bad and there are better alternatives,
    I am surprised that they are not listed here.
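
    For reference, a sketch of the kind of check V$SHARED_POOL_RESERVED
    supports, using the failure-tracking columns documented for that view:

    select request_failures, last_failure_size, free_space, max_free_size
    from v$shared_pool_reserved;

    A steadily climbing REQUEST_FAILURES, with LAST_FAILURE_SIZE around the
    4200 bytes mentioned above, would point at reserved-pool pressure rather
    than general fragmentation.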

    And finally note 62143.1. I am still re-reading this one, as I still
    have much to learn in "tuning the shared pool". This is a good appendix
    for terms and offers various scripts, but none that I found to be very

    Other references:
    "Understanding Shared Pool Memory Structures", Russell Green, Sep 2005
    (Oracle white paper)
    Scripts from Alejandro Vargas' blog

    -----Original Message-----
    From: Mladen Gogala [mailto:gogala_at_xxxxxxxxxxxxx]
    Sent: Tuesday, June 27, 2006 7:47 PM
    To: Schultz, Charles
    Cc: duncan.lawie_at_xxxxxxxxxxxxxxxxx; Hallas, John, Tech Dev;
    Subject: Re: X$ksmsp (OSEE on Solaris 8)

    On 06/27/2006 10:30:11 AM, Schultz, Charles wrote:
> What is the alternative to track down memory issues? Sure, one could
> use DMA (Direct Memory Access), but I for one am not there yet. If
> there is a better way to diagnose and resolve memory issues, I am all
> ears (or rather, eyes *grin*).
    Track what memory issues? Insufficient shared pool? Try with V$SGASTAT.
    PGA? Try with V$PROCESS_MEMORY. Buffer cache? Try with

    What do you have in mind when you say "memory issues"? All those tables
    are well documented and stable.
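
    As a sketch of the V$PROCESS_MEMORY alternative mentioned above (the
    view exists from Oracle 10g onward; columns per its documentation):

    select pid, category, allocated, used, max_allocated
    from v$process_memory
    order by allocated desc;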

Received on Mon Jul 24 2006 - 01:54:28 CDT
