RE: cache buffer chains/where in code (multiple user real time monitor of same data)

From: Mark W. Farnham <>
Date: Wed, 16 Dec 2009 06:51:08 -0500
Message-ID: <>

2-3 seconds? For a reality comparison, sar posts new data every 5 seconds by default, if I remember correctly.

In any case, if there is some real need (implying a dollar value to justify the cost) to have multiple people get updates like this, then write some variety of daemon program so that only one process queries Oracle for the update at a set frequency (possibly on a set schedule as well), plus a lightweight client to read the answer from wherever you elect to put the results (for example a shared memory location, or, if sufficient for the demand, a simple file).
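A rough sketch of that daemon/client split, in Python. Everything here is illustrative, not from the thread: fetch_from_oracle() stands in for the one real database query, and the /tmp path is an arbitrary choice of "wherever you elect to put the results".

```python
import json
import os
import tempfile
import time

RESULT_PATH = "/tmp/monitor_snapshot.json"  # hypothetical shared drop point

def fetch_from_oracle():
    # Placeholder for the single real Oracle query (e.g. via a DB driver).
    return {"value": 42, "ts": time.time()}

def publish(result, path=RESULT_PATH):
    # Write to a temp file, then rename: readers never see a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        json.dump(result, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def poll_loop(interval=5.0, iterations=None):
    # The one daemon process that actually hits the database.
    n = 0
    while iterations is None or n < iterations:
        publish(fetch_from_oracle())
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)

def read_snapshot(path=RESULT_PATH):
    # Lightweight client: reads the cached answer, no database session at all.
    with open(path) as f:
        return json.load(f)
```

However many observers you have, the database sees exactly one polling session; adding a viewer costs a file read, not a query.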

Non-technical aside: when the request goes through the budget process, the frequent monitors are very likely to get asked whether they don't have something more important to do than watch a number change. I'm sure there are exceptions, but then it is probably only a tiny number of observers that is justified.

-----Original Message-----
From: [] On Behalf Of
Sent: Wednesday, December 16, 2009 4:31 AM
To: oracle-l
Subject: Re: cache buffer chains/where in code

Good, good. It was just a check.

Ok, next tip:

In my experience, queries attacking the same buffer(s) show up in ASH and AWR reports as being executed a lot of times and being quite heavy on buffer gets.
Especially watch out for the *same query* executed a lot of times at the *same time* by *many sessions*.
Once I proved a simple case:

For a given query taking hundreds of buffer gets per row and returning about 15 rows, on given hardware, it took about 30 parallel sessions to make "cache buffer chains" waits explode. Up to 30 sessions the server performed OK. The same query managed about 60 parallel sessions on a better server.

This is the point: scalability only stretches so far for a particular load on particular hardware.
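As a back-of-the-envelope illustration of why the knee appears (my toy model, not a measurement from the case above): if you treat the cache buffers chains latch as a serial resource, throughput grows with sessions only until the latch saturates, then flattens. All the constants below are made up.

```python
def gets_per_second(sessions, work_per_get=0.001, latch_hold=0.00001):
    # Toy model: each buffer get does work_per_get seconds of total work,
    # of which latch_hold seconds must hold a shared latch. The latch is
    # serial, so system-wide throughput can never exceed 1/latch_hold,
    # no matter how many sessions you add.
    unconstrained = sessions / work_per_get   # scales linearly with sessions
    latch_limit = 1.0 / latch_hold            # hard serial ceiling
    return min(unconstrained, latch_limit)
```

With these made-up constants the ceiling is hit at 100 sessions; past that, extra sessions just queue on the latch, which is exactly what "cache buffer chains" waits exploding looks like. Faster hardware shrinks latch_hold and moves the knee further out, which matches the 30-vs-60-session observation qualitatively.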

The other point is to look into the application design and eliminate many database sessions doing essentially the same task again and again. In my case it was many user sessions polling the same data at some 2-3 second interval just to get a "real time" data display on their screens. One day the number of users increased to the point where the server just stood still.

The solution was both to reconsider the polling interval and the design in general, and to tune the query.

Please consider the environment before printing this e-mail

From: Christo Kutrovsky <>
Date: 2009.12.11 17:20
To: oracle-l <>
Subject: Re: cache buffer chains/where in code
There are no connects/disconnects at that time.

On Fri, Dec 11, 2009 at 2:16 AM, <> wrote:

      Hehe, I concur.
      And by the same chance, are there many connects, and especially
      disconnects, going on in a short time?



From: Greg Rahn
Sent by: oracle-l-bounce_at_f
Date: 2009.12.11 01:24
To: Christo Kutrovsky <>
Cc: Martin Berger, Poder <>
Please respond to: greg_at_structuredda
Subject: Re: cache buffer chains/where in code

By chance is this using the UFS file system and not using directio (forcedirectio)?

On Thu, Dec 10, 2009 at 11:45 AM, Christo Kutrovsky <> wrote:
> I traced down the problem to a Solaris 10 BUG, bug_id 6642475; it has to do
> with kernel locks when trying to allocate contiguous memory. The code is
> inefficient, and the workaround is to disable it via
> echo "pg_contig_disable/W 1" | mdb -kw.
>
> I hope this helps someone out there. I don't know in what release it is
> resolved.

--
Regards,
Greg Rahn

--
Christo Kutrovsky
Senior Consultant
I blog at


Received on Wed Dec 16 2009 - 05:51:08 CST