Re: library cache miss ratio very high - how to find out the root cause

From: Vladimir M. Zakharychev <>
Date: Tue, 22 Apr 2008 02:12:05 -0700 (PDT)
Message-ID: <>

On Apr 22, 12:28 pm, Frank Zimmer <> wrote:
> Hi
> we are driving a Documentum based ECM system on a big sun machine (12
> CPU) (i know that Documentum is creating very bad SQL)
> Right now sometimes (quite often) the system is totally slow.
> With some monitoring we can see that at that point the miss ratio
> for the sql library cache goes over 60 %.
> At that time the shared SQL area of the SGA is used only by 60 % and
> the parse to execute ratio is nearly one.
> How can i get deeper into that ?
> Best regards
> Frank

What kind of activity goes on in the application when you observe these symptoms? This may give a better clue on how to approach the problem.

It's quite likely that the application floods the server with SQL statements containing literals, and it most probably does not use the array interface. For example, inserting 10 rows with 10 separate statements, each with a unique set of literal values, results in 10 hard parses and 10 new statements in the shared pool that will never be reused. The most efficient approach would be to prepare a single INSERT statement with bind variables, bind an array of arguments to it, and execute the 10-row insert in one call. A less efficient approach (but sometimes the only one available, as with Thin JDBC) would be to execute the prepared statement 10 times with different bind values.
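To make the parsing cost concrete, here is a hypothetical sketch in Python (the table and column names are made up, not Documentum's actual schema): ten literal inserts produce ten distinct SQL texts, each of which the server must hard-parse and cache separately, while the bind-variable form is a single shareable statement.

```python
# Hypothetical illustration: why literal SQL defeats statement sharing.
rows = [(i, "doc_%d" % i) for i in range(10)]

# Literal SQL: every statement text is unique, so the shared pool
# ends up holding 10 one-off cursors, each hard-parsed once.
literal_stmts = [
    "INSERT INTO docs (id, name) VALUES (%d, '%s')" % (rid, name)
    for rid, name in rows
]
assert len(set(literal_stmts)) == 10  # 10 distinct texts -> 10 hard parses

# Bind-variable SQL: one statement text, parsed once and reused.
bind_stmt = "INSERT INTO docs (id, name) VALUES (:id, :name)"
# With the array interface (e.g. executemany() in the Python DB-API,
# or OCI array binding) all 10 rows go to the server in a single call:
#     cursor.executemany(bind_stmt, rows)
```

The commented-out `executemany()` call is the one-round-trip array insert described above; the exact API depends on the driver in use.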

You can try setting CURSOR_SHARING=SIMILAR to let Oracle attempt to "auto-bind" statements with literals and reduce the pressure on the shared pool and CPUs, but do so with caution and test this feature extensively before enabling it in production, as it has been known to produce bizarre side effects.
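The effect of CURSOR_SHARING can be sketched as follows (a crude Python imitation, nothing like Oracle's actual parser): the server replaces each literal with a system-generated bind of the form :"SYS_B_n", so statements that differ only in their literal values collapse to one shareable cursor.

```python
import re

def auto_bind(sql):
    """Crude imitation of CURSOR_SHARING literal replacement:
    swap quoted strings and bare numbers for :"SYS_B_n" binds."""
    counter = [0]
    def repl(_match):
        bind = ':"SYS_B_%d"' % counter[0]
        counter[0] += 1
        return bind
    return re.sub(r"'[^']*'|\b\d+\b", repl, sql)

s1 = auto_bind("SELECT * FROM docs WHERE id = 42 AND name = 'a'")
s2 = auto_bind("SELECT * FROM docs WHERE id = 99 AND name = 'b'")
assert s1 == s2  # both rewritten texts now share a single cursor
```

This also hints at why the feature needs careful testing: the optimizer no longer sees the actual literal values at parse time, which can change execution plans.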


   Vladimir M. Zakharychev
   N-Networks, makers of Dynamic PSP(tm)
