Re: buffer advisor

From: Gaja Krishna Vaidyanatha <gajav_at_yahoo.com>
Date: Mon, 8 Sep 2014 07:42:39 +0530
Message-Id: <E94FEFB8-C8DE-45D3-A110-DEECD9189B93_at_yahoo.com>



Hi Mark,

Thank you my dear friend for your kind words and support :) I love the +42, will use it going forward :)

Cheers,

Gaja

Gaja Krishna Vaidyanatha
Founder & CEO
DBPerfMan LLC
+1-650-743-6060
http://www.dbperfman.com
http://www.dbcloudman.com
Enabling Oracle Exadata, Cloud Deployment & Management on Oracle & Big Data

> On Sep 7, 2014, at 22:04, "Mark W. Farnham" <mwf_at_rsiz.com> wrote:
>
> Thanks Gaja. +42, both on your approach and your helpful specific technical notes.
>
> Another symptom to avoid, even with a very well behaved development system, is when automatic memory management reaches a very nearly steady state. You might think this would be the sweet spot for automation to keep things “just right.” But no! It is almost as if AMM had CTD! Almost invariably this leads to a slight oscillation near a balance point, with just a few granules swapping back and forth in assignment between the buffer cache and the shared (sql) pool. This can of course be catastrophic, since the buffers that are given up and reloaded over time tend to be the hottest buffers, and likewise the hottest sql. Usually adding those few granules to the high point of both is a small price to pay (and then shutting the doggone thing off). I am unaware of any switch to set to say “don’t make a change unless it is at least X granules” (betting that the stability is worth more than a small change), and likewise I am unaware of any “emergency pool” that could be dynamically allocated (even the PGA if really needed to prevent an error, and even though it would then be thrown away), although I’ve long since made both those enhancement requests.
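> For anyone who wants to check whether an instance is doing this granule ping-pong, a rough sketch (standard V$SGA_RESIZE_OPS columns; the one-day window is arbitrary) is simply to count recent resize operations per component:
>
>   -- count resize operations per SGA component over the last day;
>   -- a high count on both the DEFAULT buffer cache and the shared pool
>   -- suggests the oscillation described above
>   SELECT component, oper_type, COUNT(*) AS ops
>   FROM   v$sga_resize_ops
>   WHERE  start_time > SYSDATE - 1
>   GROUP  BY component, oper_type
>   ORDER  BY ops DESC;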
>
> AH! The slippery slope between automation and CTD*!
>
> mwf
>
> *CTD: Compulsive Tuning Disorder, coined and described by our friend Gaja
>
> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Gaja Krishna Vaidyanatha (Redacted sender "gajav_at_yahoo.com" for DMARC)
> Sent: Sunday, September 07, 2014 2:41 AM
> To: exriscer_at_gmail.com; Seth Miller
> Cc: Chris Taylor; Oracle-L Freelists
> Subject: Re: buffer advisor
>
> Dear Oracle-L,
>
> This is an interesting discussion and from what has transpired thus far, it is pretty obvious that there are no draconian "rights or wrongs" here. Nevertheless, I wanted to share something relevant to the subject, as I have been dealing with this issue for over a year. Here it is:
>
> 1) At the outset, I'd like to address Cheng's question on the usability of the db advisor. I find the data (not the recommendations) useful, and here is how I use it. I look at the AWR section "Advisory Statistics" and, within that, "Buffer Pool Advisory". I then compare the Estimated Phys Read & Est %DBTime for Rds values with the Size Factor values (11.2.0.4). The Size Factor value of 1.0 is the current allocation, and it is very useful to compare this to 1.5 (a 50% increase in allocation) or even 1.98/2.0 (doubling the existing allocation). I have used this successfully since the 10g days, when it was first introduced. The data has consistently given me valid points on over/under allocation of the various pools. I then generate the perspective and recommendation on my own.
>
> Basically, the data allows me to determine at a high level whether increasing a given buffer pool will result in any significant benefit. On the flip side, it also allows me to make reasonable data-driven conclusions on whether "overallocation" has occurred for any of the pools. This is again done by comparing the values at 1.0 to those at, say, 0.5. In this case, if the Phys Read and DB Time numbers are not significantly different, then it can be concluded that overallocation has in fact occurred, and the pool can be shrunk and the memory re-purposed to another pool.
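> As a rough sketch, the same figures that feed the AWR "Buffer Pool Advisory" section can be pulled straight from V$DB_CACHE_ADVICE (column names as of 11.2; verify with DESC v$db_cache_advice on your version):
>
>   -- estimated physical reads and % of DB time spent on reads
>   -- across size factors; size_factor = 1.0 is the current allocation
>   SELECT size_for_estimate, size_factor,
>          estd_physical_reads, estd_pct_of_db_time_for_reads
>   FROM   v$db_cache_advice
>   WHERE  name = 'DEFAULT'
>   AND    advice_status = 'ON'
>   ORDER  BY size_factor;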
>
> ***It is very important that the above method be utilized across multiple heavy-load-time AWR reports. It is never a good idea to make any significant decision/change based on a single data point from one AWR report.***
>
> So for the aforementioned reasons, I keep db_cache_advice ON, as I find the data that it provides relevant and useful. I use the above to make my own decisions for resizing and ignore the recommendations from the Advisor (in OEM), as I have found those recommendations to be unreliable. To be fair, the Advisor has no application knowledge or context, and that is exactly where we as DBAs come into the picture.
>
> 2) Like some of you, I have had to turn off ASMM completely, due to the nature of the application's SQL generation, in an effort to avoid unnecessary resize operations. I personally would rather have a static, slightly over-sized shared pool than MMAN doing its own thing and causing overhead on a very latency-sensitive application.
>
> I'm sure you folks know that just setting SGA_TARGET=0 does not turn the feature off, as ASMM will kick in, unbeknownst to you, for "immediate" memory requests. We found this out the hard way, when one day our db_cache was automatically resized to 256MB and the shared_pool was resized to 9GB by these immediate requests. The original values were 8GB for db_cache_size and 512MB for shared_pool_size, respectively. This was with SGA_TARGET set to 0.
>
> You can tell how many immediate vs. deferred requests are being processed by looking at v$sga_resize_ops. To turn off ASMM completely, you will have to set _memory_imm_mode_without_autosga to FALSE. When you do that, you potentially open yourself up to ORA-4031s, but if you size your shared pool correctly (using the Shared Pool Advisory in AWR), you can avoid the error for the most part.
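> A minimal sketch of both steps (the underscore parameter is hidden and should only be changed in consultation with Oracle Support; SCOPE=SPFILE means it takes effect at the next restart):
>
>   -- split of resize operations by mode (IMMEDIATE vs. DEFERRED vs. MANUAL)
>   SELECT oper_mode, component, COUNT(*) AS ops
>   FROM   v$sga_resize_ops
>   GROUP  BY oper_mode, component
>   ORDER  BY oper_mode, ops DESC;
>
>   -- disable immediate-mode self-tuning when SGA_TARGET=0
>   ALTER SYSTEM SET "_memory_imm_mode_without_autosga" = FALSE SCOPE = SPFILE;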
>
> Last year, at a customer project, ORA-4031s were occurring due to DDLs on interval-partitioned tables not being aged out (the last thing Oracle should cache in the shared pool is DDL, but that was in fact happening). So, when a batch job generated a bunch of "alter table xxxx truncate partition for (yyyy);" statements, the ORA-4031s showed up. We first reduced the number of sub-pools to 2 (_kghdsidx_count = 2), as the default number of sub-pools was shared_pool_size/512MB. The ORA-4031 always occurred on sub-pool #1. This was logged as a bug last year (Bug#16864042) on 11.2.0.3 and was supposed to be fixed in 11.2.0.4. Regrettably, the bug re-surfaced in the past few weeks on 11.2.0.4 and the new Bug#19461270 has been assigned.
>
> We are currently working with Oracle Development to resolve this, but in the interim we have temporarily set our shared_pool_size to 8GB and set _enable_shared_pool_durations=FALSE, basically having a single duration for all object types (instance, session, execution & cursor) within each sub-pool. As a precautionary measure, we also have a flush cron job that flushes the shared pool when free memory goes below a certain threshold.
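> The essence of such a check is nothing more than a query like the one below; the 500 MB threshold mentioned in the comment and the flush that follows are illustrative only, as the real job and threshold have to be tuned to the application:
>
>   -- shared pool free memory in MB; if it drops below ~500 MB,
>   -- the cron job issues the flush
>   SELECT ROUND(bytes/1024/1024) AS free_mb
>   FROM   v$sgastat
>   WHERE  pool = 'shared pool'
>   AND    name = 'free memory';
>
>   ALTER SYSTEM FLUSH SHARED_POOL;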
>
> None of this is pretty, but we'd rather pay a little extra in hard parsing than run the risk of ASMM engaging in re-sizing operations when it shouldn't have. More often than not, the cost of additional hard parses is minuscule compared to valid/frequently used data and index blocks not being present in the database buffer cache (due to resize operations). That, fundamentally, is the rationale behind this whole workaround. As soon as the bug related to caching DDLs is fixed, we will revert our shared_pool_size back to something reasonable (1GB) and we can turn off the flush cron job. I am an optimist and I hope that day will dawn soon :)
>
> Will keep you posted as things develop on our end!
>
> Cheers,
>
> Gaja
>
>
> Gaja Krishna Vaidyanatha,
> CEO & Founder, DBPerfMan LLC
> http://www.dbperfman.com
> http://www.dbcloudman.com
> Phone - +1 (650) 743-6060
> LinkedIn - http://www.linkedin.com/in/gajakrishnavaidyanatha
> Co-author: Oracle Insights: Tales of the Oak Table - http://www.apress.com/9781590593875
> Primary Author: Oracle Performance Tuning 101 - http://www.amzn.com/0072131454
> Enabling Exadata, Big Data and Cloud Deployment & Management for Oracle
>
> From: Ls Cheng <exriscer_at_gmail.com>
> To: Seth Miller <sethmiller.sm_at_gmail.com>
> Cc: Chris Taylor <christopherdtaylor1994_at_gmail.com>; Oracle-L Freelists <oracle-l_at_freelists.org>
> Sent: Sunday, September 7, 2014 2:45 AM
> Subject: Re: buffer advisor
>
> Hi Seth
>
> If an app does not use bind variables, then no matter how the shared pool is sized (we can set it to 1TB and hard parsing will still go on), hard parsing will always occur, so having a larger shared pool does not mean hard parsing will be reduced. I don't think ASMM should favor the shared pool in such a drastic way. AMM and ASMM are features to simplify management, so Oracle can tell customers that the database is easier to manage, but I don't think an experienced DBA should rely on them. Don't you think so?
>
> Thanks
>
>
>
>
> On Fri, Sep 5, 2014 at 7:22 PM, Seth Miller <sethmiller.sm_at_gmail.com> wrote:
> Chris,
>
> I believe "a few situations" qualify as exceptions. The developers have to write these tools to work with the majority of cases, not the exceptions. This is why the DBA still has the ability to set a minimum size for each of the managed pools.
>
> Hard parsing is extremely expensive so it doesn't surprise me that ASMM would favor the shared pool over the buffer cache, especially when you consider that there are alternatives to the buffer cache like the keep pool. There is no such in-memory alternative for the library cache.
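> For anyone who hasn't used it: directing a hot segment to the keep pool is just a storage attribute plus a sized pool. The 2G size and the table name below are purely illustrative:
>
>   ALTER SYSTEM SET db_keep_cache_size = 2G SCOPE = BOTH;
>   ALTER TABLE app_owner.hot_lookup STORAGE (BUFFER_POOL KEEP);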
>
> Seth Miller
>
>
> On Thu, Sep 4, 2014 at 5:50 PM, Chris Taylor <christopherdtaylor1994_at_gmail.com> wrote:
> Seth,
>
> Really? I have run into a few situations where the advisor undersizes the buffer cache significantly in favor of the shared pool because of the workload of the application.
> I've got a db right _now_ that has a 128MB buffer cache and a 20GB shared pool that AMM resized because of the workload :)
>
> Obviously, the solution to this is to set floor (minimum) values for shared_pool_size and db_cache_size, but it still amazes me that ASMM/AMM will significantly undersize the buffer cache when the workload uses a lot of SQL that isn't reusable.
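> Concretely, that floor is just the explicit pool parameters set alongside SGA_TARGET (the sizes below are made-up numbers); with SGA_TARGET in effect, ASMM will not shrink either pool below these values:
>
>   ALTER SYSTEM SET db_cache_size = 8G SCOPE = BOTH;
>   ALTER SYSTEM SET shared_pool_size = 4G SCOPE = BOTH;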
>
> And I clearly recognize that the workload is suboptimal (lots of SQL with literals and a few other things that favor a large shared pool); my only point is that it isn't uncommon for the automatic memory resizing to size the buffer cache to an absurd size :)
>
> Chris
>
>
> On Thu, Sep 4, 2014 at 4:56 PM, Seth Miller <sethmiller.sm_at_gmail.com> wrote:
> Ls,
> I have found with very few exceptions that ASMM (SGA_TARGET) is very good at sizing the buffer cache. Have you tried this?
> Seth Miller
> On Sep 4, 2014 3:38 PM, "Ls Cheng" <exriscer_at_gmail.com> wrote:
> Hi all
>
> Has anyone used the buffer cache advisory in 10g or 11g to size a production buffer cache? If so, how good is the advisor at recommending the cache size? Did the recommended cache size meet the physical reads reduction goal?
>
> TIA
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 08 2014 - 04:12:39 CEST
