Fwd: suddenly direct path read from small tables

From: samuel guiñales cristobal <samuelg.cristobal_at_gmail.com>
Date: Mon, 24 Oct 2016 12:08:49 +0200
Message-ID: <CAESzQCE3PeFH1O7UUXL+a+S97hRQGafaz=FYw5E8VisZzAEc3g_at_mail.gmail.com>



Yes, my mistake in pasting. This problem is affecting two databases (the same direct path read I/O wait issue) and I pasted the other database's info by mistake. Sorry, here is the correction:

The table is 41MB; number of rows: 35473

SELECT name, block_size, buffers FROM v$buffer_pool;              ---> 120832 buffers
SELECT blocks FROM dba_tables WHERE table_name = 'NOMBRE_TABLA';  ---> 5158

_small_table_threshold                                 2416
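
For reference (a sketch; reading the x$ fixed tables needs a SYS connection),
the current value of the hidden parameter can be checked with:

-- as SYS: hidden parameters are not visible in v$parameter by default
SELECT a.ksppinm AS name, b.ksppstvl AS value
  FROM x$ksppi a, x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_small_table_threshold';

Note that 2416 is 2% of the 120832 buffers above, matching the
2%-of-buffer-cache derivation discussed below, and the table's 5158 blocks
exceed that threshold, consistent with it being a direct path read candidate.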




---
Samuel Guiñales Cristobal <samuelg.cristobal_at_gmail.com>
«Let your food be your medicine, and your medicine your food.» Hippocrates

On 24 October 2016 at 11:47, Ls Cheng <exriscer_at_gmail.com> wrote:


> Hi
>
> You only have a 1GB db cache and a 1680MB sga_max_size, but in your first
> post you said you have 1451136 database buffers, which would be 11337MB
> with an 8K db block size and is clearly wrong. That probably confused all
> of us, because if your table is 273MB it clearly exceeds the 2% threshold
> (it is actually 27% of 1024MB) and is therefore a perfect direct path read
> candidate. As for why it does an FTS at all, the plan probably changed
> from an index execution plan to an FTS plan.
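>
> As a sanity check, the arithmetic (just an illustration, assuming an 8K
> block size) can be run from SQL*Plus:
>
> -- 1451136 buffers x 8192 bytes is ~11337MB, far above the real cache
> SELECT ROUND(1451136 * 8192 / 1024 / 1024) AS implied_cache_mb FROM dual;
> -- 2% of the real 1024MB cache is only ~20MB, well below the 273MB table
> SELECT ROUND(0.02 * 1024, 1) AS threshold_mb FROM dual;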
>
> BR
>
>
> On Mon, Oct 24, 2016 at 11:40 AM, samuel guiñales cristobal <
> samuelg.cristobal_at_gmail.com> wrote:
>
>> Hi Cheng,
>>
>> ASMM, no doubt. It has always had the same small amount of memory, so why are there problems now?
>>
>> sga_max_size big integer 1680M
>> sga_target big integer 0
>>
>> NAME                    SIZE MB
>> -------------------- ----------
>> Shared Pool Size            480
>> Large Pool Size              32
>> Java Pool Size              128
>> sga_max_size: 1680
>> shared_pool_size: 480
>> large_pool_size: 32
>> java_pool_size: 128
>> db_cache_size: 1024
>> log_buffer: 6.140625
>> sort_area_size: .0625
>> pga_aggregate_target: 256
>>
>> % Free SharedPool
>> -----------------
>> 11.54
>>
>> NAME                 POOL            SIZE MB
>> -------------------- ------------ ----------
>> free memory          shared pool  32.9287186
>> free memory          large pool           32
>> free memory          java pool           128
>> row cache            shared pool  8.23996735
>>
>>
>>
>>
>> ---
>> Samuel Guiñales Cristobal <samuelg.cristobal_at_gmail.com>
>> «Let your food be your medicine, and your medicine your food.» Hippocrates
>>
>> On 24 October 2016 at 11:28, Ls Cheng <exriscer_at_gmail.com> wrote:
>>
>>> Hi
>>>
>>> AMM => memory_target
>>> ASMM => sga_target
>>>
>>> Do you use one of them? I guess so, otherwise you would not have 1451136
>>> db block buffers with a 1GB db_cache_size. What Martin is suggesting is
>>> that if you don't have a large enough minimum db_cache_size, then due to
>>> the nature of ASMM or AMM you could end up with a quite small db cache,
>>> so a table which was not over 2% of the db cache can suddenly exceed
>>> that threshold and become a direct path read candidate.
>>>
>>> Aggregate v$sgastat by pool and show us the output so we can see the
>>> current SGA distribution.
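>>>
>>> Something like this should do (a sketch; v$sgastat exposes POOL, NAME
>>> and BYTES):
>>>
>>> SELECT pool, ROUND(SUM(bytes)/1024/1024) AS size_mb
>>>   FROM v$sgastat
>>>  GROUP BY pool
>>>  ORDER BY pool;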
>>>
>>> BR
>>>
>>>
>>> On Mon, Oct 24, 2016 at 11:21 AM, samuel guiñales cristobal <
>>> samuelg.cristobal_at_gmail.com> wrote:
>>>
>>>> Hi Martin,
>>>>
>>>> >Note that the number of blocks in the cache can be very volatile (and
>>>> quite small when the instance has just started).
>>>> >Do you have _small_table_threshold set in your spfile, or is it
>>>> calculated at startup?
>>>> Not in the spfile; it is calculated by the instance.
>>>>
>>>> >If not, do you run ASMM? If yes, do you set a minimum value for the
>>>> buffer cache?
>>>> No AMM. The buffer cache size at the last startup:
>>>> db_cache_size big integer 1G
>>>> It has never changed in this database.
>>>>
>>>>
>>>> >As _small_table_threshold is derived from the buffer cache size (2%
>>>> afaik), and the buffer cache size is volatile, it can just happen that
>>>> it was different in previous instances, and that other values are
>>>> effective now simply due to the crash.
>>>> >You can check this in your instance's alert.log.
>>>>
>>>> I checked all the startups in the alert.log: always a 1G db_cache_size
>>>> and no volatile memory changes (no ALTER SYSTEM memory commands in the
>>>> alert.log).
>>>>
>>>>
>>>> >I just threw a lot of details in your direction; maybe let's start by
>>>> clarifying whether it's an optimizer/plan issue or a direct/scattered
>>>> read issue?
>>>> Thanks Martin. After the workaround we fixed the problem, but the FTS
>>>> continues in memory :(
>>>>
>>>>
>>>> ---
>>>> Samuel Guiñales Cristobal <samuelg.cristobal_at_gmail.com>
>>>> «Let your food be your medicine, and your medicine your food.» Hippocrates
>>>>
>>>> On 23 October 2016 at 21:28, Martin Berger <martin.a.berger_at_gmail.com>
>>>> wrote:
>>>>
>>>>> Samuel,
>>>>>
>>>>> there can be various reasons why a direct path read is preferred over
>>>>> filling the buffer cache.
>>>>> Just as a reference, here is Tanel Põder's article about it:
>>>>> http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/
>>>>>
>>>>> First of all, it's not an optimizer decision; that's the reason why
>>>>> rebuilding the index didn't help.
>>>>> As Tanel (and your workaround) showed, new/different statistics can
>>>>> change the behaviour.
>>>>>
>>>>> So let's separate things:
>>>>> You jump from "direct path read" to "FTS" - can you please check
>>>>> whether the PLAN changed?
>>>>> If it did, you have to identify why the optimizer picked a different
>>>>> plan.
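>>>>>
>>>>> For example (just a sketch; substitute the statement's sql_id from
>>>>> v$sql, no diagnostic pack needed):
>>>>>
>>>>> SELECT * FROM TABLE(
>>>>>   DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));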
>>>>>
>>>>> Another question would be why it flips from scattered reads to direct
>>>>> reads. That's more about the value of _small_table_threshold, the
>>>>> number of blocks in the buffer cache, and other factors.
>>>>> Note that the number of blocks in the cache can be very volatile (and
>>>>> quite small when the instance has just started).
>>>>> Do you have _small_table_threshold set in your spfile, or is it
>>>>> calculated at startup?
>>>>> If not, do you run ASMM? If yes, do you set a minimum value for the
>>>>> buffer cache?
>>>>> As _small_table_threshold is derived from the buffer cache size (2%
>>>>> afaik), and the buffer cache size is volatile, it can just happen that
>>>>> it was different in previous instances, and that other values are
>>>>> effective now simply due to the crash.
>>>>> You can check this in your instance's alert.log.
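>>>>> To see how many of the table's blocks are cached right now, something
>>>>> like this sketch works (substitute your owner and table name):
>>>>>
>>>>> SELECT COUNT(*) AS cached_blocks
>>>>>   FROM v$bh
>>>>>  WHERE objd = (SELECT data_object_id
>>>>>                  FROM dba_objects
>>>>>                 WHERE owner = '&owner'
>>>>>                   AND object_name = '&table_name')
>>>>>    AND status != 'free';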
>>>>>
>>>>> I just threw a lot of details in your direction; maybe let's start by
>>>>> clarifying whether it's an optimizer/plan issue or a direct/scattered
>>>>> read issue?
>>>>>
>>>>> Then we can dig deeper.
>>>>>
>>>>> best regards,
>>>>> Martin
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2016-10-23 20:31 GMT+02:00 samuel guiñales cristobal <
>>>>> samuelg.cristobal_at_gmail.com>:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Recently we suffered a storage problem in a RAC; all instances went
>>>>>> down. After recovery, we detected high "direct path read" waits on
>>>>>> some small tables in some databases, with OS I/O wait at 20-40%: a
>>>>>> severe performance problem.
>>>>>>
>>>>>> Rebuilding indexes and gathering statistics on these tables did not
>>>>>> solve the problem.
>>>>>> The execution plan changed to a full table scan, and the index is not
>>>>>> used as before.
>>>>>>
>>>>>> Maybe the optimizer is not involved and it is the parameter
>>>>>> _small_table_threshold? But why did the physical reads start now?
>>>>>>
>>>>>> *Some info for one database, and the workaround:*
>>>>>>
>>>>>> version............................................: 11.2.0.4
>>>>>> OS....................................................: Linux x86-64
>>>>>> info: no Tuning or Diagnostics Pack
>>>>>> _small_table_threshold..............: 29022
>>>>>> Table with high physical reads (almost all, 95%):
>>>>>> -Table size................: 273M
>>>>>> -Table block num .....: 34730
>>>>>> db memory buffers............................ : 1451136
>>>>>>
>>>>>> We applied a *workaround*: we faked the table stats so it looks like
>>>>>> there are only 20K blocks for that table, which avoids the physical
>>>>>> reads:
>>>>>>
>>>>>> EXEC DBMS_STATS.SET_TABLE_STATS('user','table_name',*numblks=>20000*);
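>>>>>>
>>>>>> Note: a regular stats gather would overwrite the faked numblks, so
>>>>>> locking the stats should keep the workaround in place (a sketch with
>>>>>> the same placeholder names):
>>>>>>
>>>>>> EXEC DBMS_STATS.LOCK_TABLE_STATS('user','table_name');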
>>>>>>
>>>>>> Any idea how to solve that FTS? And why after the storage problem?
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>>
>>>>
>>>
>>
>
-- http://www.freelists.org/webpage/oracle-l
Received on Mon Oct 24 2016 - 12:08:49 CEST
