
Re: Times spent on latches...fast responses gratefully appreciated.

From: Oracle User <spamspamspam_at_ntlworld.com>
Date: Wed, 5 Jun 2002 20:10:32 +0100
Message-ID: <WntL8.1731$NV6.100093@news8-gui.server.ntli.net>


Thank you...

"dk" <dk_at_nowhere.com> wrote in message
news:XftL8.117970$cQ3.3639_at_sccrnsc01...
>
>
>
>
> If the application issues large quantities of SQL containing literal values
> in the where clause, you'll find the view x$ksmlru a good place to judge the
> "damage" concurrent users are causing in the shared pool.
>
> This view details the activity of inbound requests for shared pool as
> follows:
> ksmlrsiz: amount of contiguous memory being allocated
> ksmlrnum: number of objects being displaced
> ksmlrhon: name of inbound object if pl/sql or cursor
> ksmlrohv: hash value of object loaded
> ksmlrses: saddr of the loading session
>
> Each time you select from this view you'll receive 10 rows of data, which
> will immediately be purged from the view so that the next select will
> receive 10 new rows. So if an immediate second select returns 10 basically
> empty rows, there was no inbound activity in the shared pool.
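
A minimal sketch of such a check, assuming a connection as SYS (the X$ views
are not visible to ordinary users); the column names are the ones described
above:

    -- each select consumes the rows it returns from x$ksmlru
    SELECT ksmlrsiz,   -- contiguous memory being allocated
           ksmlrnum,   -- number of objects displaced to make room
           ksmlrhon,   -- name of the inbound object (PL/SQL or cursor)
           ksmlrohv,   -- hash value of the object loaded
           ksmlrses    -- saddr of the loading session
      FROM x$ksmlru
     ORDER BY ksmlrsiz;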
>
> Using this view during heavy activity will help you judge what's going on
> inside the shared pool. If you find something particularly interesting in
> there you can take the hash value and find the full text of the SQL
> statement in v$sqlarea joined with v$sqltext on hash_value.
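
A sketch of that lookup, with :hv standing for the hash value picked out of
x$ksmlru:

    -- v$sqltext holds the statement in pieces; order by piece to reassemble it
    SELECT t.sql_text
      FROM v$sqlarea a, v$sqltext t
     WHERE a.hash_value = :hv
       AND t.hash_value = a.hash_value
       AND t.address    = a.address
     ORDER BY t.piece;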
>
> The problem I've seen with a large shared pool managing loads of literal SQL
> statements is that the shared_pool free list becomes very fragmented over
> time, and since it is so long (700M worth of address space/blocks in your
> case), searching this list for a contiguous chunk of free space can take a
> very long time. Since almost every user query (the literal ones) will
> cause a search on the free list first, many sessions will queue up for the
> shared_pool latch and everything will slow down. The only way to get out of
> this nasty dilemma is to FLUSH the shared_pool, which essentially
> de-fragments the free list and allows the whole process to start again.
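
For reference, the flush itself is a single statement (it needs the ALTER
SYSTEM privilege):

    ALTER SYSTEM FLUSH SHARED_POOL;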
>
> Given the nature of the application, the shared_pool will fill and thus the
> shared_pool free list will eventually become fragmented. A larger
> shared_pool will only prolong the inevitable, may worsen the overall effect
> on the end user, and makes for unpredictable performance under varying
> loads.
>
> With a smaller shared_pool, you'll not have to worry about the list, as it
> will be short enough that even when it is completely and utterly fragmented,
> the latch hold time needed to search it will be much shorter.
>
> You can use the view above to determine how long it takes from a fresh FLUSH
> before every request starts displacing old objects. I suspect you'll find
> that your shared pool fills very quickly regardless of its size. The
> question then becomes: how long after that until my system slows to a crawl
> due to shared_pool latch contention as a result of the size of the free
> list?
>
> As already mentioned, cursor_sharing may be something for you as well.
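
For completeness, on 8.1.6 and later cursor_sharing can be switched to FORCE
so that literals are rewritten into system-generated bind variables; a sketch
of trying it out:

    -- per session first, to test the effect on the 3rd-party SQL
    ALTER SESSION SET cursor_sharing = FORCE;
    -- instance-wide once satisfied (or set cursor_sharing = force in the init.ora)
    ALTER SYSTEM SET cursor_sharing = FORCE;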
>
> Hth,
>
> -Kevin
>
> "Oracle User" <spamspamspam_at_ntlworld.com> wrote in message
> news:BsrL8.1585$NV6.77522_at_news8-gui.server.ntli.net...
> > Scenario...
> >
> > We have a powerful server (8 CPU / 4GB RAM).
> > We have an application whose code cannot be modified (3rd party etc).
> > The shared pool is 700MB, overutilised by 180%.
> > Most statements do not use bind variables. Most statements are different
> > and fill the pool etc.
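
A rough way to see how much of that 700MB is actually free at any moment is
a sketch like this against v$sgastat:

    SELECT pool, name, bytes
      FROM v$sgastat
     WHERE pool = 'shared pool'
       AND name = 'free memory';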
> >
> > So, if I increase the size of the pool at the expense of free available
> > RAM, will this be less damaging than reducing the size of the shared pool
> > so that more CPU cycles are used in servicing the shared pool? I.e. will
> > the time spent on the latch be longer adding new statements to free memory
> > in a massive shared pool, or will the time spent on the latch be less than
> > if I had to use a small shared pool and constantly age out rubbish SQL?
> >
> > This is a non-MTS environment and cursor handling is at the default for 8.1.7.
> > We have bags of spare RAM, and bags of spare CPU cycles available.
> >
> > I acknowledge that I am painting over rust, but I need a more pragmatic
> > approach rather than a rule of thumb.
> > I am at loggerheads with several other DBAs; my approach of increasing
> > the shared pool to avoid shared pool overhead does not go down too well.
> > The crux of my case stands on the fact that I think, but cannot prove,
> > that adding new parsed SQL to the SP is less expensive in CPU terms than
> > ageing out an old statement and inserting the new statement. Whilst I
> > know my approach is a horrific waste of memory, we have spare memory in
> > a wildly overspecced server and so this should be removed as a
> > consideration.
> >
> > If only N*ku would write reusable SQL... If only I could determine exact
> > times spent on the latch adding a new statement to a pool with free chunks
> > available, or times spent on a latch ageing out a statement etc.
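
The cumulative figures (not per-operation timings) are at least visible in
v$latch; a sketch:

    -- gets/misses/sleeps for the latches protecting the shared pool and library cache
    SELECT name, gets, misses, sleeps
      FROM v$latch
     WHERE name IN ('shared pool', 'library cache');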
> >
> > Thanks in advance...
> >
> >
> >
> >
>
>
Received on Wed Jun 05 2002 - 14:10:32 CDT
