RE: Keep buffer cache question

From: <"">
Date: Fri, 15 Mar 2019 14:24:53 +0000
Message-ID: <0D8F4CAC0F9D3C4AACC63F50FD9957F762D7E776_at_PRDTXWPEMLMB32.prod-am.ameritrade.com>



Thank you all for your replies.

So sorry, I had been focusing on things that changed from last week, so I left off relevant information. By “a lot of physical i/o” I only meant that in the SQL ordered by Physical Reads (UnOptimized) section of the AWR reports, those two queries now show up consistently during both slow and busy times, whereas they did not show up at all in any reports I ran for last week.
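A minimal sketch of that comparison outside the formatted report, assuming AWR/Diagnostics Pack access ('&sql_id' is just a placeholder for the redacted statement):

    -- Per-snapshot deltas for one statement: executions vs. physical reads,
    -- to see exactly when reads-per-execution changed.
    SELECT sn.begin_interval_time,
           st.executions_delta                           AS execs,
           st.disk_reads_delta                           AS phys_reads,
           ROUND(st.disk_reads_delta
                 / NULLIF(st.executions_delta, 0), 2)    AS reads_per_exec
    FROM   dba_hist_sqlstat  st
    JOIN   dba_hist_snapshot sn
           ON  sn.snap_id         = st.snap_id
           AND sn.dbid            = st.dbid
           AND sn.instance_number = st.instance_number
    WHERE  st.sql_id = '&sql_id'
    ORDER  BY sn.begin_interval_time;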

In Segments by Physical Reads the tables in question appear in the same positions both last week and this week: the tables are in the #3 and #4 positions, with the index used by one of the queries in the #5 position. That hasn’t changed from the previous week. Another table is in #2 and that other table’s index is in the #1 slot. Here is the information for the 2 hours, with names redacted:

Owner          Tablespace          Object               Obj. Type   Physical Reads   %Total
<table owner>  <tablespace_name>   <index1 on table1>   INDEX           11,169,159    54.84
<table owner>  <tablespace_name>   <table1>             TABLE            5,051,890    24.80
<table owner>  <tablespace_name>   <table2>             TABLE              880,373     4.32*
<table owner>  <tablespace_name>   <table3>             TABLE              595,843     2.93*
<table owner>  <tablespace_name>   <index2 on table2>   INDEX              520,725     2.56*

*tables and indexes involved in the 2 queries that started doing physical reads

With some sleep I’m wondering if I might get more bang for my buck by simply increasing the SGA and the db_cache_size. The tables associated with these 2 queries are only responsible for about 10% of the physical reads. Maybe ALTER table2 to CACHE?
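A minimal sketch of the two options being weighed; the size and the app_owner/table2 names are placeholders, and ALTER SYSTEM assumes an spfile:

    -- Option 1: grow the default buffer cache (with SGA_TARGET set,
    -- db_cache_size acts as a floor rather than a fixed size).
    ALTER SYSTEM SET db_cache_size = 8G SCOPE = BOTH;

    -- Option 2: mark the table CACHE. Note this only changes how blocks
    -- read by FULL TABLE SCANS are placed on the LRU list; it does not
    -- affect blocks read via the index access these queries are doing.
    ALTER TABLE app_owner.table2 CACHE;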

I’ve been trying to get an answer on how the data is accessed (e.g., are certain accounts always queried at the same time of day, might this behavior be related to seasonal activity, etc.) but the developers don’t know and haven’t been able to find out so far.

Meanwhile, as of this morning the query against table2 is no longer showing up in the SQL ordered by Physical Reads (UnOptimized) section, so this whole thing might be a red herring. I’ll look into some of the other listserv suggestions made today. Thank you all!

Jay Miller
Sr. Oracle DBA
201.369.8355

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Chinar Aliyev
Sent: Thursday, March 14, 2019 10:44 PM
To: Jonathan Lewis
Cc: Oracle-L Freelists
Subject: Re: Keep buffer cache question

Hi Jay.
What does 'a lot of physical i/o' mean? Which wait events are observed for the problematic SQL statements, and have you compared the average wait times of those events as well? For example, if the sessions are waiting on DB FILE SEQUENTIAL READ, you can compare the current average wait time with that of the previous week. If the average wait time (especially for this wait) has increased, there is a chance some hardware/disk reconfiguration has taken place, for example a move to RAID-6. Could you confirm that by asking your sysadmin team?
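A minimal sketch of that week-over-week comparison, assuming AWR is available and a single instance (add PARTITION BY instance_number to the LAG calls for RAC); deltas come from cumulative counters, so rows spanning an instance restart will be misleading:

    -- Average 'db file sequential read' wait per AWR snapshot.
    SELECT sn.begin_interval_time,
           ev.total_waits
             - LAG(ev.total_waits) OVER (ORDER BY ev.snap_id)      AS waits,
           ROUND((ev.time_waited_micro
                    - LAG(ev.time_waited_micro) OVER (ORDER BY ev.snap_id))
                 / NULLIF(ev.total_waits
                    - LAG(ev.total_waits) OVER (ORDER BY ev.snap_id), 0)
                 / 1000, 2)                                        AS avg_wait_ms
    FROM   dba_hist_system_event ev
    JOIN   dba_hist_snapshot     sn
           ON  sn.snap_id         = ev.snap_id
           AND sn.dbid            = ev.dbid
           AND sn.instance_number = ev.instance_number
    WHERE  ev.event_name = 'db file sequential read'
    ORDER  BY sn.begin_interval_time;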

On Fri, Mar 15, 2019, 02:24 Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk> wrote:

Did any of the relevant segments appear in "Segments by physical reads"? You need to find out whether it's the indexes or the tables. As a basic guideline, you almost certainly WON'T beat the LRU algorithm by setting up the keep cache.
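A minimal sketch of that check using the cumulative counters in v$segment_statistics (values are since instance startup, not per AWR interval):

    -- Top 10 segments by physical reads since startup: separates
    -- index segments from table segments at a glance.
    SELECT *
    FROM  (SELECT owner, object_name, object_type, value AS physical_reads
           FROM   v$segment_statistics
           WHERE  statistic_name = 'physical reads'
           ORDER  BY value DESC)
    WHERE  rownum <= 10;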

A change like this can happen simply because objects (and particularly indexes) get bigger over time as the data sizes grow. You can get into the position (especially when the number of queries grows) that a query reads a leaf block into memory but causes another leaf block from the same index to be flushed, and a few seconds later some other query wants the leaf block that was flushed.

Consider an index on (customer_id, order_date): when the data is small, the index entries for "the most recent order for customer X" may put two or three customers in the same leaf block, so one query benefits from the caching caused by another. As the data gets larger you reach a position where every customer has several leaf blocks, every query for "the most recent order for customer X" has to read a different leaf block, and queries no longer get any benefit from each other. At this point your only solution is to increase the buffer cache to ensure that one block for each customer can stay in memory long enough for its next usage.
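A minimal illustration of that pattern (the table and index names are invented for the example):

    CREATE TABLE orders (
      customer_id  NUMBER        NOT NULL,
      order_date   DATE          NOT NULL,
      order_id     NUMBER        PRIMARY KEY,
      payload      VARCHAR2(200)
    );

    CREATE INDEX orders_cust_date_ix ON orders (customer_id, order_date);

    -- "Most recent order for customer X": while the index is small,
    -- several customers' entries share one leaf block; once every
    -- customer spans multiple leaf blocks, each execution of this
    -- query touches a leaf block no other customer's query will reuse.
    SELECT *
    FROM  (SELECT o.*
           FROM   orders o
           WHERE  o.customer_id = :cust_id
           ORDER  BY o.order_date DESC)
    WHERE  rownum = 1;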

If you do try implementing a KEEP pool, don't forget to check for the effects of read-consistency. Depending on how CR blocks are created, you may find them as copies created in the default cache, or the recycle cache (if you have one), and some (because of operation "copy current to new buffer") will be in the keep cache. Sizing the keep cache to hold the blocks AND the CR copies can be problematic. You'll have to check what actually happens, because the behaviour changes with the version of Oracle and I haven't checked it recently.
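A minimal sketch of the setup and the suggested check; the pool size and the APP_OWNER/INDEX2 names are placeholders, and note that v$bh shows buffer status ('xcur', 'cr', ...) but not which pool a buffer sits in (that needs x$bh as SYS):

    ALTER SYSTEM SET db_keep_cache_size = 2G SCOPE = BOTH;
    ALTER INDEX app_owner.index2 STORAGE (BUFFER_POOL KEEP);

    -- Count current vs. consistent-read copies of the object's blocks.
    SELECT bh.status, COUNT(*) AS buffers
    FROM   v$bh        bh
    JOIN   dba_objects o ON o.data_object_id = bh.objd
    WHERE  o.owner       = 'APP_OWNER'
    AND    o.object_name = 'INDEX2'
    GROUP  BY bh.status;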

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of dmarc-noreply_at_freelists.org <dmarc-noreply_at_freelists.org>
Sent: 14 March 2019 21:22
To: oracle-l_at_freelists.org
Subject: Keep buffer cache question

In doing an AWR report comparison for comparable times one major difference I saw was that 2 frequently run queries were suddenly doing a lot of physical i/o. For a comparable 2 hour period they went from 1.5 million to 1.8 million executions but physical reads increased from 0 to 1.2 million. I sampled a few other random times and this was consistent. The queries are both doing index access. One is an index range scan and the other a unique scan against the primary key.
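A minimal sketch of confirming those access paths from the cursor cache ('&sql_id' stands in for each of the two statements):

    -- Show the actual plan (e.g. INDEX RANGE SCAN / INDEX UNIQUE SCAN)
    -- for a cursor still in the shared pool.
    SELECT *
    FROM   TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'TYPICAL'));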

--
http://www.freelists.org/webpage/oracle-l

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Mar 15 2019 - 15:24:53 CET
