Re: suddenly direct path read from small tables

From: Ls Cheng <exriscer_at_gmail.com>
Date: Mon, 24 Oct 2016 11:28:29 +0200
Message-ID: <CAJ2-Qb915tusf+1=htPExS8zKZDJSpbL2Qq-yzR2g=txt_MW0A_at_mail.gmail.com>



Hi

AMM => memory_target
ASMM => sga_target

Do you use one of them? I guess so, otherwise you would not have 1451136 db block buffers with a 1GB db_cache_size. What Martin is suggesting is that if you don't have a large enough minimum db_cache_size, then due to the nature of ASMM or AMM you could end up with a quite small db cache, so a table that was not 2% of the db cache before can suddenly become so and become a direct path read candidate.
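
If you want to double check, something like this (just a sketch, it needs SYS access to the x$ views) shows the current value of _small_table_threshold, which should be roughly 2% of your db block buffers (2% of 1451136 is about 29022):

-- run as SYS: current value of _small_table_threshold
SELECT a.ksppinm  AS parameter,
       c.ksppstvl AS current_value
FROM   x$ksppi  a,
       x$ksppcv c
WHERE  a.indx = c.indx
AND    a.ksppinm = '_small_table_threshold';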

Aggregate v$sgastat by pool and show us the output so we can see the current SGA distribution.
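
For example (just a rough query, adjust as you like):

SELECT NVL(pool, 'other (no pool)')     AS pool_name,
       ROUND(SUM(bytes) / 1024 / 1024)  AS mb
FROM   v$sgastat
GROUP  BY pool
ORDER  BY mb DESC;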

BR

On Mon, Oct 24, 2016 at 11:21 AM, samuel guiñales cristobal <samuelg.cristobal_at_gmail.com> wrote:

> Hi Martin,
>
> >Not only can the number of blocks in the cache be very volatile, it is
> also quite small right after the instance starts.
> >Do you have _small_table_threshold set in your spfile or is it calculated
> at startup?
> not in the spfile, calculated by the instance.
>
> >If not, do you run ASMM? If yes, do you set a minimum value for the buffer cache?
> no AMM; buffer cache size at the last start:
> db_cache_size big integer 1G
> never changed in this database
>
>
> >As _small_table_threshold is derived from the buffer cache size (2% afaik),
> and the buffer cache size is volatile, it can just happen that it was
> >different in previous instances, and that just due to the crash different
> parameter values are in effect now.
> >You can check this in your instance's alert.log.
>
> checked all starts in the alert.log, always 1G db_cache_size, no volatile
> memory changes (no ALTER SYSTEM of memory in the alert.log)
>
>
> >I just threw a lot of details in your direction, maybe let's start to
> clarify if it's an optimizer/Plan issue or direct/scattered read?
> thanks Martin, after the workaround we fixed the problem, but the FTS
> continues in memory :(
>
>
> ---
> Samuel Guiñales Cristobal <samuelg.cristobal_at_gmail.com>
> «Let your food be your medicine, and your medicine your food.» Hippocrates
>
> On 23 October 2016 at 21:28, Martin Berger <martin.a.berger_at_gmail.com>
> wrote:
>
>> Samuel,
>>
>> there can be various reasons why a direct path read is preferred over
>> filling the buffer cache.
>> Just as a reference, here is Tanel Põder's article about it:
>> http://blog.tanelpoder.com/2012/09/03/optimizer-statistics-driven-direct-path-read-decision-for-full-table-scans-_direct_read_decision_statistics_driven/
>>
>> First of all, it's not an optimizer decision. That's the reason why
>> rebuilding the index didn't help.
>> As Tanel (and your workaround) showed, new/different statistics can
>> change the behaviour.
>>
>> So let's separate things:
>> You jump from "direct path read" to "FTS" - can you please check if the
>> PLAN changed?
>> In this case you have to identify why the optimizer picked a different
>> plan.
>>
>> Another question would be why it flips from scattered reads to direct
>> reads. That's more about the value of _small_table_threshold, the number
>> of blocks in the buffer cache, and other factors.
>> Not only can the number of blocks in the cache be very volatile, it is
>> also quite small right after the instance starts.
>> Do you have _small_table_threshold set in your spfile or is it calculated
>> at startup?
>> If not, do you run ASMM? If yes, do you set a minimum value for the buffer cache?
>> As _small_table_threshold is derived from the buffer cache size (2% afaik),
>> and the buffer cache size is volatile, it can just happen that it was
>> different in previous instances, and that just due to the crash different
>> parameter values are in effect now.
>> You can check this in your instance's alert.log.
>>
>> I just threw a lot of details in your direction, maybe let's start to
>> clarify if it's an optimizer/Plan issue or direct/scattered read?
>>
>> Then we can dig deeper.
>>
>> best regards,
>> Martin
>>
>>
>>
>>
>> 2016-10-23 20:31 GMT+02:00 samuel guiñales cristobal <samuelg.cristobal_at_gmail.com>:
>>
>>> Hi all,
>>>
>>> Recently we suffered a storage problem in RAC, all instances down.
>>> After recovery, we detected high "direct path read" from some
>>> small tables in some databases, OS I/O wait 20-40%, a hard performance
>>> problem.
>>>
>>> Rebuilding the indexes and gathering statistics on these tables did not
>>> solve the problem.
>>> The execution plan changed to do a full table scan and the index is not
>>> used like before.
>>>
>>> Maybe the optimizer is not involved and it is the parameter
>>> _small_table_threshold? But why do the physical reads start now?
>>>
>>> *some info on one database and the workaround:*
>>>
>>> version............................................: 11.2.0.4
>>> OS....................................................: Linux x86-64
>>> info: no Tuning and Diagnostics Pack
>>> _small_table_threshold..............: 29022
>>> Table with high phys reads (almost all, 95%):
>>> -Table size................: 273M
>>> -Table block num .....: 34730
>>> db memory buffers............................ : 1451136
>>>
>>> we applied a *workaround*: we faked the table stats so it looks like
>>> there are only 20K blocks for that table, to avoid the physical reads:
>>>
>>> EXEC DBMS_STATS.SET_TABLE_STATS('user','table_name',numblks=>20000);
>>>
>>> Any idea how to solve that FTS? And why did it start after the storage problem?
>>>
>>> Regards
>>>
>>>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Oct 24 2016 - 11:28:29 CEST
